Implementation of a MEIoT Weather Station with Exogenous Disturbance Input

Due to the emergence of the coronavirus disease (COVID-19), education systems in most countries have adapted and quickly changed their teaching strategy to online teaching. This paper presents the design and implementation of a novel Internet of Things (IoT) device, called the MEIoT weather station, which incorporates an exogenous disturbance input, within the National Digital Observatory of Smart Environments (OBNiSE) architecture. The exogenous disturbance input involves a wind blower based on a DC brushless motor. It can be controlled, via the Node-RED platform, manually through a sliding bar or automatically via different predefined profile functions, modifying the wind speed and the wind vane sensor variables. An application to engineering education is presented with a case study that includes the instructional design for the least-squares regression topic, covering linear, quadratic, and cubic approximations, within the Educational Mechatronics Conceptual Framework (EMCF) to show the relevance of this proposal. This work's main contribution to the state of the art is to turn a weather monitoring system into a hybrid hands-on learning approach thanks to the integrated exogenous disturbance input.

Introduction
Technological advancement derived from more than 50 years of Moore's Law has brought humanity into an era where the life cycles of products and technologies have been shortened. This advancement has also led to the knowledge generated by humanity doubling roughly every 2 years. For their part, educational systems are in a constant search for the adoption of new educational technologies that enhance their students' education, preparing them for the skills that will be required in the jobs of the new industrial era. The transformation of educational systems has been slow in most educational institutions. However, with the emergence of the coronavirus disease 2019 (COVID-19), education systems in most countries accelerated this adoption and quickly changed the teaching strategy to online teaching [1][2][3][4][5]. Some of these works propose methodologies that integrate skills and attitudes towards this new form of education. However, one of the main challenges arises when a hands-on learning (HOL) approach is involved. HOL in engineering education has traditionally involved students interacting with an artifact in a laboratory [6,7]. The combination of HOL and the COVID-19 situation has accelerated the necessity to develop technology that allows this interaction to be done remotely. Years before the emergence of COVID-19, there had already been proposals for remote experiments or online laboratories, such as [8], which presented a remote control system that allows experiments to be carried out remotely. The implementation of an exogenous disturbance input for the MEIoT weather station is described in Section 3.

Methodology
The research presented in this work is applied research that aims to develop an educational IoT device and a complete platform to offer an alternative for conducting a HOL approach in a remote access laboratory or hybrid education environment in the context of the COVID-19 pandemic. We collected the primary data directly to manipulate and control variables to determine cause and effect through a quantitative analysis focused on measuring, modifying, and interpreting the weather station variables' behavior.
The aforementioned was carried out with a flexible design developed through a real data acquisition process within the OBNiSE architecture, focused on applying educational mechatronics as described in the following section. Moreover, an instructional design based on the Educational Mechatronics Conceptual Framework is presented to demonstrate how to use the collected real data and interact with the MEIoT weather station through a remote laboratory.

OBNiSE Architecture for Educational Mechatronics
The imminent change in processes since the appearance of the fourth industrial revolution, also called Industry 4.0, has led to the creation of projects and processes based on the Internet of Things (IoT). The IoT makes it possible to improve and create new systems and architectures according to the new requirements of Industry 4.0. These technological changes have allowed this pandemic not to be an obstacle, especially for education, which has adapted to a model of one hundred percent virtual classes that allow students to learn through distance learning strategies. Several proposals have allowed educational institutions to adapt quickly to these changes. Among the most important works are those that use architectures or software based on IoT. For instance, the IoT architecture called OBNiSE, presented in [17], is extended in this proposal to incorporate the educational component and describe the interaction of the elements in the architecture. Through the OBNiSE layered composition, the description of any IoT system or application can be done from the integrated devices to the user application. As described in the previous paper, this architecture is composed of six layers: (1) a device layer that contains all devices to collect information; (2) a network layer that manages tools, cables, and users; (3) a processing layer in charge of information processing and visualization of data; (4) a cloud layer that ensures the availability of information to users, devices, and applications; (5) an application layer that allows the connection of services, applications, and systems through mobile devices; and, finally, (6) a security layer that supervises all layers, allowing secure data transmission and securing the information of the users within the systems. For more information, please consult [17]. This proposal presents the OBNiSE architecture applied to education, combining the Internet of Things and Educational Mechatronics. This architecture is based on IoT and comprises several layers; each layer is responsible for the functionality of a complete system or mobile application and contains tools, devices, and a web service. The OBNiSE architecture inspires the IoT Educational Mechatronics architecture proposed in this section. The proposed architecture is presented in Figure 1. The OBNiSE IoT architecture for Educational Mechatronics involves several elements that interact at different levels of the system: a web system that includes data visualization from the MEIoT weather station's sensors, a training web that is part of the web system, a MEIoT weather station with different data sensors, and users, who are divided into two categories, educators and participants. Each of them is described in detail below. 1. Web system. It is hosted in a cloud system, which can be accessed by educators. The web shows all the sensors' parameters and data that can be manipulated, the graphs of the real-time sensed data, the configuration of the MEIoT weather station, and the users' training times per session.
This web system is stored in the OBNiSE. 2. Training web. It is part of the central web system. However, it has limitations for the participants' view, since it only allows them to view the MEIoT weather station's current information but does not allow them to change its configuration. The training web updates the information sent by the participants in real time. 3. MEIoT weather station. It is an IoT device that integrates temperature, relative humidity, barometric pressure, altitude, light, rainfall, wind speed, and wind direction sensors. It is worthwhile to mention that this work proposes the inclusion of an exogenous disturbance input for the wind speed and wind vane sensors; this allows the modification and observation of the system's behavior itself. 4. Users. The system is composed of two types of users, described as follows. • Educators. They have access to all the data and configuration of the MEIoT station and have the complete knowledge for handling the MEIoT station. Likewise, they allow or deny access to registered participants in the system. Educators can also modify the dashboard, the data, and the update interval, among other settings. • Participants. These are students and people who are part of the mechatronics education course and require access to the platform to use the MEIoT weather station. The participants have limited access and can only modify some parameters allowed by the Educator for a defined time. The MEIoT weather station records the modifications made in user sessions directly from the web system connected to the cloud and allows users to graph the data. As already mentioned at the beginning of this section, the architecture presented respects the OBNiSE architecture interactions. An exogenous disturbance input is introduced to modify the wind speed and provoke changes in the weather station's behavior; this variable can be modified from the web, manually or with different profiles, to which participants will have access. For this first stage, only this variable is considered; however, it is planned that, in the near future, all of the MEIoT weather station sensors' variables will be able to be perturbed in a desirable manner with several exogenous disturbance inputs. Finally, when working with multiple simultaneous user sessions on the web, a queuing system is managed, in which each request or change is attended to within a time t determined by the Educator, in order to control the requests made during a total established session time t_t. During active sessions, each participant can make one or more changes to the exogenous disturbance input, causing changes in the wind speed variable, and view the changes in the updated variable. This change will be displayed until a new input is considered.

Implementation of Exogenous Disturbance Input to the MEIoT Weather Station
A set of actions between the user and the system is considered in the design stage; these actions can be for testing or education purposes. As already mentioned at the beginning of this work, this proposal is focused on an educational environment adapted to the newly emerging technological needs due to the pandemic, in which students require tools that allow them to acquire new knowledge of Industry 4.0 with techniques of Education 4.0.
As described in [17], the MEIoT weather station comprises temperature, relative humidity, barometric pressure, altitude, light, rainfall, wind speed, and wind direction sensors, and a wind blower actuator was added in this work to generate an external disturbance, while the station is prepared to integrate more exogenous disturbance inputs. This disturbance is under the user's control. The 3D model of the complete MEIoT weather station with an exogenous disturbance input can be seen in Figure 2. More details about the implementation of the MEIoT weather station with exogenous disturbance input can be found in the following subsections.

The MEIoT Weather Station with Exogenous Disturbance Input
Implementing the exogenous input to the MEIoT weather station turns the device into a facility with instruments and equipment for measuring atmospheric conditions and a set of manipulable exogenous disturbance sources. This information can be used for different purposes. As shown in Figure 3, participants and educators can modify inputs to the MEIoT weather station using the web system to monitor the sensors' behavior, providing learning for students. The 3D model of the MEIoT weather station with exogenous disturbance input is shown in Figure 2.

Implementation of the MEIoT Weather Station with Exogenous Disturbance Input within the OBNiSE Architecture
According to the architecture proposed in [17], which is composed of six layers (Devices, Network, Processing, Cloud, Applications, Security), it is crucial to have an architecture that meets the requirements of Industry 4.0 with elements of the Internet of Things and that, in turn, complies with security, adaptability, robustness, and availability characteristics. That is why this architecture proposal includes the elements described in the OBNiSE architecture and extends the elements presented in [17] to strengthen the educational part through an application and an environment for students with hybrid learning. These elements are integrated into the OBNiSE platform as follows: • Device Layer: The MEIoT weather station with exogenous disturbance input uses a microcontroller and a set of sensors described in [17], plus a wind blower based on a DC brushless motor, described later in Section 3.3 and shown in Figure 4. • Network Layer: This layer considers tools, user profiles, and data accessibility and ensures communication between the architecture's devices and layers. The H-bridge for actuating the blower is connected through a dedicated microprocessor Pulse Width Modulation (PWM) pin with two digital signals for direction and enabling. The network layer comprises three elements: tools, user profiles, and data accessibility. Each item is composed as follows: - Tools: This item encompasses every tool required to connect the MEIoT weather station with its sensors, actuators, and application, from cables and connectors to the Virtual Network Computing (VNC). Any additional tool is also part of this element. - User profiles: Participants and educators are the two user profiles defined. Participants can see the sensors' information using a PC or a mobile device and manipulate the actuator output; educators, on the other hand, also have the possibility of modifying parameters within the MEIoT weather station; these profiles are explained in Section 3.4. - Data accessibility: This element defines the communication channel for the devices, users, and information.
The protocols used for this implementation are WiFi and the Message Queuing Telemetry Transport (MQTT) protocol. • Processing Layer: The MEIoT weather station uses an ESP32 microprocessor with built-in WiFi capability to capture sensors' data, drive actuators, and communicate with the cloud. IBM's Watson IoT platform, a specialized platform for IoT, is employed for cloud computing. • Cloud Layer: The information and data are stored in a database in the cloud; a Node-RED-based application, Figure 5, manages access to it. The data are available to the MEIoT weather station, the web system, and the OBNiSE architecture. The IBM Watson platform is currently used for information storage. • Applications Layer: For data visualization and exogenous disturbance input manipulation, a web application based on Node-RED was created. This application allows interaction with the MEIoT weather station and its exogenous disturbance input from a mobile device, a computer, or the web. Section 3.4 explains it in more detail. • Security Layer: Security is applied across the devices, applications, data storage, network, and processing layers. The security allows only the Educator profile to modify, configure, and see the complete information about the MEIoT weather sensors and to register participants who can enter to see the system's behavior. Configurations must be defined for data protection, for the use of the application at the user-profile and application levels, and for the processing and cloud data storage. The security implementation starts with the Watson IoT Platform, where unique organization IDs are assigned. IBM's Watson IoT Platform has three main security aspects: Transport Layer Security (TLS), authentication, and authorization. These aspects remain the same as in [17]. The Watson IoT platform integrates two kinds of policies, connection and messaging policies, that provide access control using a Client ID and a User ID. Access is allowed as long as these credentials are valid; the verification is done through the MQTT protocol. The MEIoT weather station comprises a microchip microcontroller containing a Central Processing Unit (CPU) to process the information. The microcontroller is connected to the sensors to receive the sensed data and to a motor-based actuator that can be activated and controlled (speed and spin direction) through PWM. The microcontroller is also connected to the cloud to upload the sensed data via WiFi. Figure 6 depicts the icon used for the MEIoT weather station, adding the exogenous disturbance input.

MEIoT Weather Station with Exogenous Disturbance Input Sensors and Actuator
The sensors are described in [17]. Each one of them was analyzed and selected considering its features, how suitable it is for the application, and its price and availability. As mentioned in [17], the system requires three fundamental phases for its design and implementation process: cloud-based database design, communication between elements, and an open-source website platform. The relational model developed for this system was modified to add the PWM duty cycle and the sampling time. The website platform can also consult the database. We used an ESP32 microcontroller, which has two Xtensa 32-bit LX6 microprocessors, on an Arduino-based board; sensors are connected using the I2C communication bus, analog ports, and external interrupts. We used a dedicated PWM pin and two General Purpose Input-Output (GPIO) pins for the direction and enable signals to drive the exogenous disturbance.
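To make the device-to-cloud exchange described above concrete, the sketch below shows the publish/subscribe pattern between a station and the Watson IoT Platform over MQTT. This is only an illustrative Python sketch (paho-mqtt 1.x API); the actual station firmware runs in C++ on the ESP32, and the organization ID, device type, device ID, token, and topic names used here are placeholders, not values from the paper.

```python
# Minimal publish/subscribe sketch of the station-to-cloud MQTT exchange (illustrative only).
import json
import time
import paho.mqtt.client as mqtt   # paho-mqtt 1.x style client

ORG = "myorg"                      # hypothetical Watson IoT organization ID
DEVICE_TYPE = "meiot-station"      # hypothetical device type
DEVICE_ID = "station-01"           # hypothetical device ID
TOKEN = "device-auth-token"        # hypothetical authentication token

client = mqtt.Client(client_id=f"d:{ORG}:{DEVICE_TYPE}:{DEVICE_ID}")
client.username_pw_set("use-token-auth", TOKEN)   # Watson IoT device credentials
client.tls_set()                                  # TLS, as required by the platform
client.connect(f"{ORG}.messaging.internetofthings.ibmcloud.com", 8883)

def on_command(client, userdata, msg):
    """Handle incoming commands, e.g., a new PWM duty cycle for the blower."""
    command = json.loads(msg.payload)
    print("received command:", command)

client.on_message = on_command
client.subscribe("iot-2/cmd/blower/fmt/json")     # command topic (placeholder name)
client.loop_start()

while True:
    # In the real device these values come from the sensors every 10 s.
    reading = {"wind_speed_kmh": 12.4, "wind_direction_deg": 270,
               "temperature_c": 23.1, "pwm_duty_pct": 40}
    client.publish("iot-2/evt/status/fmt/json", json.dumps({"d": reading}))
    time.sleep(10)
```

The same two topics (one event topic for sensor readings, one command topic for disturbance commands) mirror the two roles of the dual-core firmware described in the next paragraph.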
The embedded software was developed in the C++ language, taking advantage of the ESP32's dual core. One core manages the sensors' connections and measurements, and it also establishes the exogenous disturbance input. Once the sampling time was reached (10 s for this implementation), this core created a JSON document with all of the information and passed it to the second core. The second core managed the internet connection; it sent the JSON document to IBM Watson IoT by publishing it to a specific MQTT topic. This core also subscribed to a specific topic to retrieve commands in JSON document format. The electronics were arranged in a small perforated plastic enclosure mounted on a tripod for practical usage (see Figure 7) and could be moved to a specific location for running tests in different environments. Several different wind sources were analyzed for the exogenous disturbance input: a pedestal fan, a computer fan, Figure 4a, a mini desktop fan, Figure 4b, and an air blower, Figure 4c. One of the main features considered for the selection was suitability, discarding the pedestal fan due to its dimensions, power consumption, more complicated power stage, and higher price. The other three fans were very similar in their electrical characteristics, power stage, dimensions, and price. A field test was carried out to select one from the three remaining fans. The selection criteria were that the fan must generate a wind stream strong enough to stimulate the anemometer and a speed range wide enough so that a 10% difference in the power output registered a notable difference in the sensor's response. The air blower was the best option under the selection criteria. A Radox 510-752 blower model was employed, Figure 4c. It had a maximum power consumption of 3.84 W at 12 V, with a maximum speed of 3000 revolutions per minute (RPM). The blower is driven in open loop through a dedicated PWM output from the microcontroller, Figure 8. Finally, an HW-95 board with an L298N dual H-bridge was used as a power stage to drive the blower motor. This exogenous disturbance source can generate disturbances for both the wind speed and wind direction sensors, stimulating the variability of those registered inputs.

Graphic User Interface GUI-MEIoT 2.0
Several applications can be used for the visualization and manipulation of the data of an IoT application. For the online platform, in a previous work, we used Grafana Cloud [17]. However, we had to change to Node-RED on IBM Cloud because Grafana only allowed us to receive data from devices, and it was not designed to send commands or information to devices. Node-RED is a programming tool for connecting devices, APIs, and online services, while Grafana is for online database queries and management. Node-RED is built on Node.js, so it has access to more than 225,000 modules in Node's package repository. The navigation of the website is intuitive and easy for the user. The platform also allows setting the time frame, i.e., the start and end times, for the graphics shown. The user can also download the data from the website in Comma-Separated Values (CSV) format. With Node-RED, we can define different user roles to establish the resources each user can access. So far we have defined two roles: Educator and User. The sensed data are stored in a MySQL database. Node-RED also presents a set of consultation tools that are useful for data visualization and analysis. Figure 9 shows a dashboard screenshot from a 15 min experiment we carried out to test the sensors' response.
In the upper row, from left to right, we can see the wind direction and wind speed; the middle row shows the temperature and relative humidity; and the bottom row shows the barometric pressure and pluvial precipitation. Finally, the exogenous disturbance input control is shown in the screenshot's upper right, where it can be controlled automatically via the profile functions or manually through the sliding bar. The light sensor measurement is not shown, as it was not tested in our experiment.

Application to Engineering Education within the Educational Mechatronics Framework Using the MEIoT Weather Station with Exogenous Disturbance Input
This work's educational collaboration scenario involves educators and participants engaged in interactive and dynamic synchronous sessions, simultaneously in person and online, the so-called hybrid learning (see Figure 10). The connection between the various actors is managed and facilitated by the OBNiSE, from which issues such as access, data availability, access to the MEIoT station, data visualization, dashboard configuration, and data modifications, among other things, can be handled. Each of the users (educator or participant) has an independent connection within the system that allows queries or modifications to the MEIoT weather station; the process that describes the connection and the visualization of the data is shown in Figure 11. As can be seen, a participant requests access to the web through a link shared by the educator or using a username and password; once the credentials are validated, the web system displays the current data of the MEIoT weather station. The participant can then observe the parameter or parameters to be modified; in this case it is only one parameter, the wind speed. The parameter is sent over the network to the server. Once the server responds to the request, it updates the sensor's behavior and updates the data in the web system. The entire process of requesting and updating data is carried out in an estimated time of 5 min per participant request. The participant can modify the exogenous disturbance as many times as desired to observe the wind speed changes at the MEIoT station. The waiting time varies according to the number of users connected to the web. The server's current operation is as follows: each time a request is received, it is queued and served in order of arrival. The connection process between the users, the website, and the weather station is described in Figure 12. The exchange of messages between the user's device, the website, and the weather station allows the access and successful modification of the station parameters during the time the user is connected. The website allows students to visualize and gain a practical understanding, while the theoretical part is described in the next section. This model is adaptable to different scenarios and allows students to become familiar with technological platforms that enhance their knowledge with a practical part and a theoretical part that describes the applied knowledge. For this proposal, areas of knowledge including the Internet of Things, mechatronics, and algebra, to name a few, are considered.
Figure 11. Activity diagram between participants, the web system, and the MEIoT weather station.
The OBNiSE IoT architecture allows us to integrate and perform applications in several areas, such as Mobility, Health, Smart Cities, Technology, and Education.
In particular, this proposal presents an application to engineering education that aims to develop skills and abilities required by Industry 4.0 and to promote active learning using resources, existing academic spaces, practical activities, and mechatronic prototypes based on an innovative educational methodology [30]. These courses represent an alternative to improve and reduce the gap between the current knowledge at schools and the new industry requirements. The instructional design based on the EMCF is presented below.

Instructional Design
The instructional design is focused on the subject of least-squares regression, and the linear, quadratic, and cubic approximations are presented. The EMCF involves three perspective entities: statistics (process) + Internet of Things (application) + MEIoT weather station (artifact). The mechatronic concept is presented and defined in [31] as: "For a set of points (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), the least-squares regression is given by f(x) = a_0 + a_1 x (linear approximation), f(x) = a_0 + a_1 x + a_2 x^2 (quadratic approximation), or f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 (cubic approximation), that minimizes the sum of the squared errors." The pedagogical activities for the three levels of the selected perspective, which are the concrete learning level, graphic learning level, and abstract learning level, are defined as follows. First, the educator turns on the MEIoT weather station with exogenous disturbance input for the session, and all the participants log in to the web system. • Concrete Learning Level (CLL). At this level, activities aimed at perceptual-motor characteristics should be designed using the MEIoT weather station with exogenous disturbance input (see Figure 13). The activities to perform at this level, considering only the wind speed variable at a 10 s sampling time, are described as follows. 1. The participant moves the slider of the "Manual" control from 0% to 100% to manually change the exogenous disturbance input of the MEIoT weather station and observes what happens in the wind speed chart. Finally, the participant clicks on the "STOP" button. 2. The participant sends a linear profile to the MEIoT weather station by clicking the "Profile-linear" button. • Graphic Learning Level (GLL). At this level, activities aimed at the graphic (symbolic) representation of mechatronic concepts should be designed, taking as a reference the concepts learned previously at the concrete learning level. The learning will gradually make the transition from the concrete to the abstract level. For the graphical level to be more significant, the online open-source platform Node-RED is used to display the collected wind speed data. The participant can visualize the sensor's wind speed dynamics and how the wind speed value increases as time passes until reaching the final point of the introduced profile (see Figure 14). Tasks related to this level are described below. 1. The participant sends a linear profile to the MEIoT weather station by clicking on the "Profile-linear" button. 2. The participant uses the GUI-MEIoT to observe the past and current values of the wind speed variable. 3. On a white sheet of paper, the participant plots each wind speed value as a point. 4. Starting from the first point, the participant links all the points using smooth lines. 5. The participant downloads the .CSV file by clicking on the download icon. Remark: here the participant moves the pencil to link all the points. Figure 15 depicts the resulting plot.
The data collected by the participant are shown in Table 1. • Abstract Learning Level (ALL). At this level, activities should be designed to gradually transition from symbolic concepts to an abstract representation that includes mathematical equations. In many problems in the biological, physical, and social sciences, as well as in engineering, it is useful to describe the relationship between variables through a mathematical expression. A common way to do this is to fit a curve to the various data points. This curve can be linear, quadratic, cubic, and so on. The goal is to find the curve of the specific type that "best" fits the given n data points comprising the introduced linear profile (see Table 1), where x_i = blower power, y_i = wind speed (km/h), and f(x_i) = a_0 + a_1 x_i is the value of the approximation at x_i. To find the least-squares regression line for a set of points, begin by forming the system of linear equations in which the right-hand term, [y_i − f(x_i)], of each equation is thought of as the error in the approximation of y_i by f(x_i). Writing this error as e_i = y_i − f(x_i), we obtain

y_1 = (a_0 + a_1 x_1) + e_1
y_2 = (a_0 + a_1 x_2) + e_2
...
y_n = (a_0 + a_1 x_n) + e_n.

Now, we define

Y = [y_1, y_2, ..., y_n]^T,  X = [[1, x_1], [1, x_2], ..., [1, x_n]],  u = [a_0, a_1]^T,  E = [e_1, e_2, ..., e_n]^T.

Then, the matrix form for the linear approximation is given by Equation (3), Y = Xu + E. Solving for the variable u from the equation, we obtain the values of the coefficients a_0 and a_1 of the least-squares approximation line as

u = (X^T X)^{-1} X^T Y.

To find the line that best fits the points, these mathematical objects first have to be formed as presented in Equation (3). The linear approximation that best fits the points is y = 0.27178x − 14.220. Now, we can implement this in Excel for both the real wind speed data and the obtained linear approximation (see Figure 16). Moreover, Table 2 shows the squared error for every single data point and also the total sum of 3.7793. Now, we have to find the best quadratic fit for the points. In order to do so, we define the quadratic model as in Equation (1), form the vectors and matrices as the ones presented in Equation (2), and solve for u. The quadratic approximation function that best fits the points is y = 0.0013717x^2 + 0.067895x − 7.0628 (see Figure 17). This approximation leads to a total sum of squared errors of 2.7989. Following the same procedure to find the best cubic fit for the points, we again form the vectors as the ones presented in Equation (2) and solve for u. The cubic approximation function that best fits the points is y = −0.00013541x^3 + 0.3139x^2 − 2.08x + 42.384 (see Figure 18). This approximation leads to a total sum of squared errors of 1.0203. Finally, the obtained total sums of squared errors for the linear, quadratic, and cubic approximations have to be compared. It can be noted that the cubic approximation is the best fit for the real data. As a summary, Figure 19 shows the complete instructional design with the three main levels applying the EMCF. It is worth mentioning that the concrete level, presenting the process to build the MEIoT weather station, could be followed by any person with an engineering background, helping them understand the required hardware. The open-source software is easy to use and understand for users; only simple configurations of the model are required. Finally, the proposal describes the least-squares regression procedure at the abstract level, which can be easily extended to more polynomial functions and applications.
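To make the abstract-level procedure concrete, the short Python sketch below fits linear, quadratic, and cubic polynomials to a set of (blower power, wind speed) samples and compares their total sums of squared errors. The sample values are invented for illustration, not the measurements of Table 1, and numpy.polyfit is used in place of the hand-built normal equations derived above.

```python
import numpy as np

# Hypothetical (blower power %, wind speed km/h) samples; not the paper's Table 1 data.
x = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90, 100], dtype=float)
y = np.array([0.5, 1.8, 4.0, 6.9, 10.1, 13.0, 15.4, 17.2, 18.3, 18.9])

for degree, label in [(1, "linear"), (2, "quadratic"), (3, "cubic")]:
    coeffs = np.polyfit(x, y, degree)          # least-squares fit of the given degree
    y_hat = np.polyval(coeffs, x)              # model predictions f(x_i)
    sse = np.sum((y - y_hat) ** 2)             # total sum of squared errors
    print(f"{label}: coefficients={np.round(coeffs, 5)}, SSE={sse:.4f}")
```

Comparing the printed SSE values reproduces the final comparison step of the instructional design: the lowest SSE identifies the polynomial that best fits the collected data.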
Discussion
This novel MEIoT weather station with exogenous disturbance input has been designed and developed as a compact device that allows monitoring weather variables and generates an input perturbation every 10 s over a defined period. The integration of an actuator to disturb a sensor input allows the interaction demanded by hands-on learning. Moreover, this weather station can be easily adapted to any environment without expensive infrastructure. Its conformance to the OBNiSE IoT architecture allows it to be securely accessed from any connected part of the world. While the proposed configuration does not yield a linear response to the generated exogenous disturbance input, this can be solved with the controller configuration. Additionally, only one exogenous disturbance was generated for this work, but it was enough to prove that an exogenous disturbance input can help manipulate the monitored system state. The exogenous disturbance input can be varied for the wind speed sensor; however, its construction limits it to a fixed disturbance for the wind direction sensor, so the wind direction sensor response was not considered in this work. Compared with other works, such as [12], where the data are stored locally, [13,15], where the goal was only the availability of the measured data, or [16], which uses ZigBee to transmit the station data wirelessly to a local display, our proposal adapts well to engineering education by adding an interaction layer that is not found in other works. The Educational Mechatronics Framework guided us to apply the three learning construction levels, concrete, graphic, and abstract, and to cover them with an engineering education application that uses the MEIoT weather station with exogenous disturbance input as an artifact. In this sample application, the students can infer how to fit any polynomial curve to a data set using least-squares regression and analyze the best fit. We expect students to achieve or even exceed their expected grades by applying this online experimentation tool, as [3] reported with their online education scheme. This novel MEIoT weather station with exogenous disturbance input within the OBNiSE IoT architecture has been tested with a pilot test. It is poised to be deployed with engineering students for a HOL approach in a remote access laboratory or hybrid education environment. The change from a database visualizer like Grafana [13,17] to a programming environment like Node-RED gave us more control for running experiments remotely, which several weather stations cannot do. Our approach also represents a very low-cost investment for remote access laboratory equipment. While local sensor data capture and transmission are done in an MCU, the cloud processing is done with IBM cloud services; depending on the demand, a free lite service may be enough, and the GUI is developed in an open-source platform. Other approaches, such as [10], where the system is based on PLC and SCADA, or [11], where the user interface is developed in LabVIEW, involve high license costs or significant infrastructure investment. The approach in [9], where a PC runs the control interface with a LabVIEW program and the user interface runs on an Apache web server, can also mean a high energy cost to maintain complete availability. The application of this MEIoT weather station with exogenous disturbance input in the educational environment, in the current pandemic situation due to SARS-CoV-2, will allow students to attend laboratory classes while respecting the isolation measures, as suggested by [1].
Once the pandemic passes, this development will remain a useful tool by enabling hybrid education, allowing students in remote locations to conduct guided laboratory practices in an environment that is not just a computer simulation. It also allows students to work independently with laboratory equipment, with full availability, to further complement their education. Finally, this type of development can significantly appeal to educational institutions with several campuses by allowing a piece of laboratory equipment to be remotely shared throughout the institution without moving the equipment, the students, or the advisors. Future work will involve integrating more exogenous disturbance inputs besides the one showcased here, such as heat, light, and humidity sources. Additionally, a moving base for the air blower will be implemented to control the disturbance on the wind direction sensor. Furthermore, the open-loop controller will be tuned to achieve a linear response from the sensor-actuator interaction, and video streaming will be integrated to see the effects of the exogenous disturbance inputs on the sensors in real time. Finally, more engineering education applications based on the MEIoT weather station with exogenous disturbance inputs will be created. Data Availability Statement: The data presented in this study are available in Table 2. Acknowledgments: The authors want to thank the Mexican National Council of Science and Technology, CONACYT, for its support of the National Laboratory of Embedded Systems, Advanced Electronics Design and Microsystems (LN-SEDEAM by its initials in Spanish), projects number 282357, 293384, 299061, and 314841, and also for the scholarship 805876. Moreover, we want to thank the postgraduate student J. Antonio Nava-Pintor for his valuable contribution to this work. Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. Abbreviations: The following abbreviations are used in this manuscript:
Preparation of Novel Organic Polymer Semiconductor and Its Properties in Transistors through Collaborative Theoretical and Experimental Approaches

Conjugated polymer semiconductors based on donor–acceptor structures are commonly employed as core materials for optoelectronic devices in the field of organic electronics. In this study, we designed and synthesized a novel acceptor unit, thiophene-vinyl-diketopyrrolopyrrole, named TVDPP, based on a four-step organic synthesis procedure. Stille coupling reactions were applied with high yields for the polymerization of TVDPP with a fluorinated thiophene (FT) monomer. The molecular weight and thermal stability of the polymers were tested and showed high molecular weight and good thermal stability. Theoretical simulation calculations and 2D grazing-incidence wide-angle X-ray scattering (GIWAXS) tests verified the planarity of the material and its excellent stacking properties, which are favorable for achieving high carrier mobility. Measurements based on the polymer as an organic thin film transistor (OTFT) device were carried out, and the mobility and on/off current ratio reached 0.383 cm² V⁻¹ s⁻¹ and 10⁴, respectively, showing its great potential in organic optoelectronics.

Introduction
Organic semiconductor materials are potential materials for the preparation of optoelectronic devices due to their structure consisting of alternating single and double bonds, which allows for carrier transport across their conjugated systems [1]. Unlike inorganic semiconductors, organic semiconductor materials have good chemical modifiability, which is conducive to the preparation of novel materials with extensive structures to meet the application specifications and requirements of different devices [2,3]. At the same time, due to the preparation efficiency and flexibility of polymer materials, more and more research has been conducted on organic polymer systems [4]. After two decades of development, a variety of material systems have been developed and applied to a wide range of devices, including organic field-effect transistors, organic light-emitting devices, organic solar cells, sensors, memory storage, and organic thermoelectric and other functional devices [5][6][7][8][9][10]. In the case of organic transistors, for example, the primary performance indicators are electron and/or hole carrier transport and sensitive switching ratios [11,12]. The core material unit of organic transistors is the organic semiconductor material, which is generally classified into donor-acceptor, donor-donor, and acceptor-acceptor types based on the chemical composition [13,14]. Materials based on donor-acceptor structures are the most widely studied because they tend to be synthesized efficiently and with high yields [15][16][17]. At the same time, this type of architecture facilitates the flexible adjustment of the material's frontier orbital energy levels, including the highest occupied molecular orbital (HOMO) energy level, the lowest unoccupied molecular orbital (LUMO) energy level, and the energy gap, through the selection of appropriate donor and acceptor units and their ratio. Materials with appropriate energy levels bear a direct relationship to the type and speed of carrier migration [18].
Currently, the classical acceptor building blocks mostly contain imide structures, such as diketopyrrolopyrrole (DPP), isoindigo, perylene diimide, and so on [19]. New structures such as bithiophene imide, benzodifurandione-based oligo(p-phenylene vinylene), etc., have been developed, which are also capable of acting as acceptor units through the strong electron-withdrawing groups and electron-attracting functional groups contained in the molecule [20][21][22]. It has to be mentioned that the design of completely new structures is both difficult and complicated, whereas chemical modification of classical structures is much more efficient and easier [2]. As shown in Figure 1 below, researchers have carried out a multifaceted and systematic modification strategy to improve the performance of the material, using classical DPP as the main candidate [23]. DPP monomers are inherently poorly soluble materials, which limits their material processing characteristics. Chemical modification of the N-position allows for easy introduction of side chains and also improves the solubility of the material. These include long aliphatic chains, alkyl chains with branches, siloxane chains, hydrocarbon chains with fluorine substituents, unsaturated hydrocarbon-capped side chains, and so on [24][25][26][27]. The introduction of side chains increases the solubility of the material while at the same time facilitating the regulation of the stacking pattern and distance between the main chains. In some instances, the formation of intermolecular conformational locks also facilitates carrier transport. Modifications directed at the main chain are often associated with another component or a multicomponent electron donor (acceptor) and not with direct modifications to the DPP backbone. This is because DPP itself has no unoccupied reactive sites, and the properties of the material have to be modulated by modifications to the thiophene groups directly attached to it. The β-site of the thiophene can be conveniently used to insert strong electron-withdrawing groups such as fluorine atoms, chlorine atoms, cyano groups, etc., as well as to introduce electron-donating groups such as methoxy and methyl groups. At the same time, the thiophene attached to the DPP substrate can be replaced by a variety of aromatic rings, including furan, bithiophene, benzene, and pyridine rings, so that a variety of structural materials can be prepared [28,29]. Our group has reported three types of DPP-based polymers this year and applied them as materials for hole transport, electron transport, and bipolar transport. The design concepts of these three structures are based on the introduction of copolymer units with electron-donating or electron-withdrawing ability to modulate the device properties of the materials, after the introduction of different side-chain groups to improve the solubility of the materials [30][31][32]. It is worth mentioning that, up to now, there is still no effective method to directly synthesize DPP-based monomer structures without any aromatic ring [33]. Here, we propose a new synthetic idea, prepare a molecular material with a new structure, and further introduce an olefin to ensure the conjugated structure of the material.
Organic thin film transistor (OTFT): devices were fabricated on 0.6 cm × 0.6 cm SiO2/Si substrates with a bottom-gate-bottom-contact (BGBC) configuration. The SiO2/Si substrates were blown dry with nitrogen after ultrasonic cleaning in deionized water, ethanol, and isopropanol for 7 min each, followed by deposition of the source and drain electrodes by thermal evaporation. The channel width and length of the field effect transistor devices were 1400 µm and 30 µm, respectively. PTVDPP-2FT was pre-dissolved in chlorobenzene at high temperature (60 °C). The polymer films were deposited onto the substrates by spin-coating at a fixed spin-coating rate of 3000 rpm under ambient conditions. Subsequently, the polymer films were annealed at 150 °C for 10 min to remove the corresponding solvents.

Characterization: The molecular weight and degree of dispersion of PTVDPP-2FT were evaluated by high temperature gel permeation chromatography (PL-GPC220, Agilent, Santa Clara, CA, USA). Trichlorobenzene was used as the eluent, and polystyrene was used as a standard. A solution at a concentration of 0.1 mg/mL was passed through the column (2× PLgel 10 µm MIXED-B) at a flow rate of 1.0 mL/min and tested at 150 °C. The thermal stability of the solid powders was measured by a thermal gravimetric analyzer (METTLER TGA, Greifensee, Switzerland) in a nitrogen atmosphere with a heating rate of 10 °C per minute at temperatures from 50 to 550 °C. Differential scanning calorimetry (NETZSCH, Waldkraiburg, Germany) was used to measure the reaction heat of the solid powders in a nitrogen atmosphere with a heating rate of 10 °C per minute, including the first heating process 1 (from room temperature to 250 °C), the first cooling process 2 (from 250 °C to room temperature), and the second heating process 3 (from room temperature to 250 °C). Elemental analysis was measured by the CHN mode of the organic elemental analyzer (FlashSmart, Thermo Scientific, Waltham, MA, USA). Photochemical characterization was performed on a UV-visible spectrometer (Cary5000, Agilent, Santa Clara, CA, USA). The concentration of the solution was approximately 0.3 mg/mL in chloroform. The polymer semiconductor solution was spin-coated onto a pre-cleaned quartz plate and then thermally annealed at 150 °C for 20 min. Electrochemical tests were carried out in acetonitrile solutions containing tetrabutylammonium hexafluorophosphate as an electrolyte. A Ag/AgCl electrode (Ag in 0.01 mol/L KCl), a glassy carbon electrode, and a platinum electrode were applied as the reference electrode, working electrode, and counter electrode, respectively. A drop of 5 µL of polymer chloroform solution was deposited with a pipette onto the glassy carbon electrode and allowed to evaporate slowly (the circular hole of the electrode was 4 mm in diameter). OTFT transfer and output characterizations were carried out in a nitrogen glove box using a Keithley 4200 semiconductor characterization system (Keithley, Cleveland, OH, USA). 2D-GIWAXS results were obtained on the 14B/15U beamline station of the Shanghai Synchrotron Radiation Facility. Atomic force microscopy measurements were carried out using a Nanoscope V instrument (Bruker, Mannheim, Germany). These samples were identical to those used in the OTFT performance analysis.
Synthesis Routes to Polymer PTVDPP-2FT through Stille Coupling Polymerization
Scheme 1 enumerates the synthetic routes for the preparation of polymers from monomers; the steps and details of the synthesis of the raw materials and intermediates numbered 1 to 4 are described above. The intermediate 4, with two five-membered heterocycles obtained by cyclization, shows excellent planarity and a fused structure. In addition to this, the intramolecular inclusion of two carbonyl groups (C=O) leads to strong electron-withdrawing properties of this type of material, which is beneficial for the formation of polymers with alternating acceptor-donor type arrangements. Long alkyl chains are introduced at the nitrogen positions of the side chains, which on the one hand increases the solubility and subsequent processability of the material, and on the other hand does not drastically affect the intermolecular stacking distances. TVDPP, further prepared via step 4, is a promising precursor for the Stille coupling reaction. The olefin in TVDPP adopts the trans conformation, which was determined from the coupling constants in the NMR spectrum. The insertion of the olefin itself has no negative effect on the planarity of the material, while the rigid structure facilitates the efficient transport of carriers. It is worth mentioning that the introduction of olefins has a significant effect on the increase of the conjugation length of the monomer and the polymer. The monomer TVDPP can be easily copolymerized with the bis-trimethyltin-containing monomer 2FT under palladium catalysis. The schematic representation of the polymer, with alternating ordering of the single and double bonds, is highlighted in red. Common polymerization reactions rely on chlorobenzene as the reaction solvent, which is related to the poor solubility of the material. In this study, we utilized tetrahydrofuran (THF) as the solvent for the polymerization reaction and successfully prepared materials of suitable molecular weight. THF is a green and environmentally clean solvent; environmental and human hazards are thus reduced as compared to toxic reagents containing chlorine. The second method of molecular weight control is based on the regulation of the polymerization time. The molecular weight of the macromolecules increases as the polymerization time increases, causing them to precipitate in the THF solvent, which slows down and stops the reaction. The molecular weights of the polymers were evaluated, and the number average molecular weight (Mn) and weight average molecular weight (Mw) were found to be 16.97 kDa and 35.78 kDa. The average degree of polymerization of the final material was found to be 16 and 33, based on the ratio of the Mn or Mw value of the polymer to the molecular weight of the smallest repeating unit, respectively. The dispersity (Ð) of the material was found to be good, with a value of only 2.10, based on the ratio of Mw to Mn. It can be found that most of the polymers used for organic semiconductor
applications show dispersities in the range of 2.0−3.5, which is related to their degree of polymerization and the purification methods. Polymers with extremely long and average-length chains are more chaotic, while average-length polymer chains cannot be decontaminated by the Soxhlet extraction technique due to their poor solubility in normal organic solvents. The specific results of the molecular weight of the polymers are presented in Figure S9. The purity of the final product can be deduced from the elemental analysis of the polymer. The proportions of the three elements C, H, and N contained in the polymer are within tolerance of the theoretical composition of the proportions of the elements that should be expected in the smallest repeating unit (Table 1). The thermal behavior of the polymer remains almost unchanged at 350 °C, with a loss of less than 0.05% of the initial mass fraction, as shown in Figure 2. The temperature at which it loses 10% of its mass is 375 °C, demonstrating its good thermal stability. The results of differential scanning calorimetry (DSC) showed no significant phase change in the range of 30−250 °C (Figure S10).

Density Functional Theory Calculation
Polymers are difficult to grow into crystals, so we used density functional theory (DFT) to study the theoretical configurations of the materials. The theoretical calculations regarding the planarity of the materials and their frontier molecular orbital energy levels are performed with geometry optimization at the B3LYP/6-31G(d) level [34,35]. In order to simplify the calculations and reduce the computational cost, the long alkyl chain at the N-position was simplified to a short methyl chain. The non-conjugated composition of the side chains barely affects the energy levels of the structures conjugated on the main chain. On the other hand, the investigation used dimers as samples to simulate the long conjugated structures, a method accepted in many studies, which facilitates a significant reduction of the computational time.
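As an illustration of the kind of calculation described above, the sketch below runs a B3LYP/6-31G(d) geometry optimization and reads out the frontier-orbital energies using the open-source Psi4 package. The authors do not state which quantum-chemistry code they used, and the geometry here is a placeholder thiophene molecule rather than the actual TVDPP-2FT dimer, which would be far larger; it is a minimal sketch of the workflow, not a reproduction of the reported calculation.

```python
# Minimal B3LYP/6-31G(d) optimization + frontier-orbital readout with Psi4 (illustrative).
import psi4

psi4.set_memory("2 GB")
mol = psi4.geometry("""
0 1
S   0.000   0.000   1.189
C   0.000   1.233   0.001
C   0.000   0.709  -1.272
C   0.000  -0.709  -1.272
C   0.000  -1.233   0.001
H   0.000   2.275   0.291
H   0.000   1.321  -2.163
H   0.000  -1.321  -2.163
H   0.000  -2.275   0.291
""")

psi4.optimize("b3lyp/6-31g(d)", molecule=mol)                       # geometry optimization
energy, wfn = psi4.energy("b3lyp/6-31g(d)", molecule=mol, return_wfn=True)

eps = wfn.epsilon_a().to_array()                                    # Kohn-Sham orbital energies (hartree)
homo, lumo = eps[wfn.nalpha() - 1], eps[wfn.nalpha()]
print(f"HOMO = {homo * 27.2114:.2f} eV, LUMO = {lumo * 27.2114:.2f} eV")
```

In practice one would build the methyl-substituted TVDPP-2FT dimer geometry in place of the placeholder molecule, exactly as the simplification strategy in the preceding paragraph describes.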
As shown in the top view of the material in Figure 3a, the dimer structure exhibits good planarity, which is consistent with the envisaged characterization of the rigidity of the fused five-membered ring and the double bond. The thiophene was used as a bridging group to the two olefinic structural units, and the combined trithiophene-like structures show a head-to-tail linkage pattern. We measured the dihedral angles between thiophene and thiophene, which were overall less than 1°; this fully ensures the planar regularity of the polymer system. The geometrical configuration of the DFT-optimized dimer is almost planar. The good co-planarity effectively enhances π−π stacking and improves intramolecular and intermolecular charge transport, and thus facilitates the achievement of high mobility. Both the HOMO and LUMO are fully delocalized on the dual DPP acceptor moiety and the π−conjugated bridge donor segment. The HOMO mainly occupies the double bonds of the molecule and is well separated from the domains throughout the horizontal backbone of the molecule. The LUMO is mainly located in the DPP unit of the molecular chain and shows a strong dispersion; therefore, it is more beneficial for the transport of hole carriers. According to theoretical calculations, the HOMO and LUMO energy levels of the dimer are −4.73 eV and −3.10 eV, respectively, corresponding to an energy gap of 1.63 eV.

Photochemical Properties and Electrochemical Properties
In order to investigate the photochemical absorption properties of the polymer, its UV-visible absorption profiles in chloroform solution and in the solid film phase were tested. PTVDPP-2FT, similar to most polymers with structures based on donor-acceptor units, shows a dual absorption band. The lower wavelength absorption band is between 350 nm and 450 nm, which is due to the π−π* transition (Figure 4). The higher wavelength low-energy absorption is between 600 nm and 1000 nm, which is the result of charge transfer between the donor and acceptor within the molecule. The two maximum absorption peaks presented in the solution state occur at 445 nm and 750 nm, respectively, and the material shows almost no absorption beyond 1000 nm. Compared to the absorption in the solution phase, the maximum absorption peak of the polymer in the solid film state undergoes a redshift to 765 nm, which is related to the stacking of the polymer in the solid state. Based on the position of the onset absorption peak, the bandgap of the polymer can be estimated to be 1.35 eV. The narrow bandgap is determined by the fact that the material features long conjugated main chains, which is favorable for the formation of materials with high carrier mobility.
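For reference, the quoted optical gap can be related to the film absorption onset through the usual photon-energy conversion; this is the conventional estimate, not a calculation reported by the authors:

```latex
% Conventional optical band gap estimate from the film absorption onset
E_g^{\mathrm{opt}}\ [\mathrm{eV}] \approx \frac{1240}{\lambda_{\mathrm{onset}}\ [\mathrm{nm}]},
\qquad
\lambda_{\mathrm{onset}} \approx \frac{1240}{1.35} \approx 919\ \mathrm{nm},
```

which is consistent with the observation above that the film shows essentially no absorption beyond 1000 nm.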
In order to further measure the redox properties of the materials, the polymers were measured in the thin film state using cyclic voltammetry (Figure 5). The material shows a more pronounced oxidation peak compared to its reduction curve, which to some extent indicates that the material has a more prominent ability to be oxidized and is suitable for use as a P-type hole transport material. The onset of the oxidation peak is at 1.03 V, which corresponds to the HOMO energy level of the material being near −5.47 eV. Compared to the oxidation curve, its reduction peak exhibits quasi-reversibility. Based on the onset of the reduction peak, the LUMO energy level can be inferred to be −3.52 eV.
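For context, a common way to convert such onset potentials (against the Ag/AgCl reference used here) into frontier-orbital energies is the empirical relation below. The 4.44 eV offset is inferred from the quoted numbers and is an assumption, since the paper does not state the exact formula it applied:

```latex
% Empirical conversion of CV onset potentials to frontier-orbital energies
E_{\mathrm{HOMO}} \approx -\left(E_{\mathrm{ox}}^{\mathrm{onset}} + 4.44\right)\ \mathrm{eV}
  = -(1.03 + 4.44)\ \mathrm{eV} = -5.47\ \mathrm{eV},
\qquad
E_{\mathrm{LUMO}} \approx -\left(E_{\mathrm{red}}^{\mathrm{onset}} + 4.44\right)\ \mathrm{eV}.
```

Under the same convention, the reported LUMO of −3.52 eV would correspond to a reduction onset near −0.92 V.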
OTFT Device Performance

In order to characterize the charge-transport behavior of the material, we fabricated bottom-gate bottom-contact (BGBC) OTFT devices based on PTVDPP-2FT; the device configuration is shown in Figure 6a. Gold (Au) electrodes were deliberately chosen because the HOMO energy level of the polymer, obtained from the electrochemical measurements, aligns well with the work function of the Au electrodes. Owing to the poor solubility of the polymer in THF and the volatility of THF itself, films prepared from THF were not homogeneous. The solubility in chlorobenzene is much better, approximately 10 mg/mL, so chlorobenzene was adopted for film deposition. The saturation mobility (µ) is calculated from the standard saturation-regime relation

$$I_{\mathrm{DS}} = \frac{W C_i\,\mu}{2L}\,(V_{\mathrm{GS}} - V_{\mathrm{th}})^{2},$$

where I_DS and V_GS are the source-drain current and gate voltage, respectively; W and L are the channel width and length, respectively; and C_i is the capacitance per unit area of the gate dielectric layer. Table 2 summarizes the hole mobility extracted from the transfer characteristic curve (Figure 6b). The average hole mobility over ten sets of devices is about 0.356 cm² V⁻¹ s⁻¹, which is excellent for a P-type material. The maximum hole mobility of the polymer is 0.383 cm² V⁻¹ s⁻¹, achieved by optimizing the film thickness and annealing temperature. In addition to the high mobility, the devices show a low threshold voltage (V_th) and a high on/off ratio (I_on/I_off), indicating excellent device performance [36]. The output behaviors of the PTVDPP-2FT-based devices are shown in Figure 6c.
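For readers reproducing the mobility extraction, the following minimal sketch fits the square root of a saturation-regime transfer curve; the device geometry, dielectric capacitance, and the example data points are placeholders, not the actual PTVDPP-2FT values.

```python
import numpy as np

# Assumed device parameters (placeholders, not the values used in this work).
W, L = 1400e-6, 10e-6        # channel width and length (m)
Ci = 1.15e-4                 # gate-dielectric capacitance per unit area (F/m^2), ~300 nm SiO2

# Illustrative saturation-regime transfer data for a p-type device.
V_GS = np.array([-60.0, -55.0, -50.0, -45.0, -40.0, -35.0, -30.0])          # V
I_DS = np.array([8.1e-4, 6.9e-4, 5.8e-4, 4.7e-4, 3.8e-4, 2.9e-4, 2.2e-4])   # A

# In saturation, I_DS = (W*Ci*mu / 2L) * (V_GS - V_th)^2, so sqrt(I_DS) is
# linear in V_GS; the slope of the fitted line gives the mobility.
slope, intercept = np.polyfit(V_GS, np.sqrt(I_DS), 1)
mu = 2 * L * slope**2 / (W * Ci)      # m^2 V^-1 s^-1
V_th = -intercept / slope             # V_GS at which sqrt(I_DS) extrapolates to zero

print(f"mu  = {mu * 1e4:.3f} cm^2 V^-1 s^-1")
print(f"Vth = {V_th:.1f} V")
```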
Morphological Analysis of PTVDPP-2FT Films

We investigated the relationship between polymer film morphology and device performance using atomic force microscopy (AFM). Figures 7 and S11 show the AFM height and phase images of the annealed polymer film under the conditions giving optimal OTFT performance. The calculated root-mean-square (RMS) roughness of the annealed PTVDPP-2FT film is 3.12 nm. Combined with the AFM phase image, the annealed films show good continuity without significant phase separation, which facilitates high carrier mobility.

Grazing Incidence X-ray Diffraction of the Polymer Films

We investigated the relationship between the crystallinity of the polymer film and OTFT device performance using 2D-GIWAXS. As shown in Figure 8a, the polymer exhibits diffraction peaks up to fourth order, namely (100), (200), (300), and (400), and the film adopts an edge-on stacking mode. Substituting the angle of the (100) diffraction peak of PTVDPP-2FT into Bragg's equation, d sin θ = nλ, the (100) d-spacing is calculated to be 27.14 Å. From the out-of-plane (010) diffraction peak, the π−π stacking distance of PTVDPP-2FT is estimated to be 3.52 Å (Figure 8b,c). A related structure that also uses FT as the comonomer, polymerized with DPP, showed a measured stacking distance of approximately 3.95 Å [32], and most polymer systems based on the classical DPP structure exhibit larger stacking distances, in the range of 3.7−4.0 Å [37]. The tight π−π stacking here is beneficial for carrier transport and underpins the high mobility of the polymer film [11].
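Equivalently, in terms of the scattering vector probed in GIWAXS (q = 2π/d; the q values below are back-calculated from the reported spacings, not read from the line cuts):

$$q_{(100)} = \frac{2\pi}{27.14\ \text{Å}} \approx 0.232\ \text{Å}^{-1}, \qquad q_{(010)} = \frac{2\pi}{3.52\ \text{Å}} \approx 1.79\ \text{Å}^{-1}.$$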
Discussion

To design better materials with excellent device properties, expanding the variety of molecular libraries is often a straightforward and effective approach [12,38]. A typical issue facing polymeric building-block materials with donor-acceptor structures is the severe shortage of available acceptor species [20]. Some representative acceptor structures have been developed and show good carrier mobility [39-41]. To further enrich the diversity of materials, the aim of this study was to break the classical single bond connecting DPP to thiophene and to introduce an additional olefin. The synthetic route to the monomer is not complicated, and the chemical reagents and reaction conditions are undemanding. The polymer PTVDPP-2FT can be prepared by coupling polymerization with a fluorinated thiophene and shows good P-type performance. Next, more typical electron-donor units can be combined with our monomer to form polymers with a variety of structures. Beyond P-type unipolar transport, the design and preparation of ambipolar materials, as well as N-type materials with high electron mobility, are also future directions for this system. We hope that the new molecule can serve as a novel building block for expanding the field of OTFTs.
Conclusions

In this work, a four-step synthetic route to the monomer TVDPP is explored and reported. In contrast to the classical DPP structure, the monomer features a longer and more planar conjugated structure. Sn-containing electron-donor units can then be conveniently introduced by Stille coupling polymerization to prepare macromolecules. The absence of chlorine-containing reagents throughout the polymer preparation is favorable for the development of greener polymerization methods. The polymer PTVDPP-2FT exhibits excellent thermal stability and solubility. Theoretical calculations show that it adopts a particularly planar structure, which facilitates hole transfer along its main chain, and GIWAXS reveals a tight interchain stacking pattern, which facilitates the hopping transport of carriers. Ultimately, OTFT devices based on this material show hole mobilities up to 0.383 cm² V⁻¹ s⁻¹ and typical P-type transport characteristics. More materials employing this monomer as a core structural unit are being prepared and tested in our laboratory for use in optoelectronic devices.

Figure 1. Schematic representation of the new structure TVDPP inspired by the classical DPP molecular structure.
Scheme 1. Preparation of the monomer and the synthetic route based on the palladium-catalyzed coupling polymerization reaction, where alternating single and double bonds are shown in red.
Figure 3. Optimized conjugated backbone conformation of the methyl-substituted dimer of PTVDPP-2FT: (a) top view; (b) side view and frontier molecular orbital HOMO map; (c) LUMO map.
Table 1. Molecular weight of the polymer and the ratio of the three elements it contains. The molecular chemical formula is C62H90F2N2O2S3.
Bogdanov–Takens Bifurcation in a Shape Memory Alloy Oscillator with Delayed Feedback

This work is focused on a shape memory alloy oscillator with delayed feedback. The main attention is on investigating the Bogdanov–Takens (B-T) bifurcation by choosing the feedback parameters A_{1,2} and the time delay τ. The conditions for the occurrence of the B-T bifurcation are derived, and the versal unfolding of the normal forms near the B-T bifurcation point is obtained by using center manifold reduction and normal form theory. Moreover, it is demonstrated that the system also undergoes different codimension-1 bifurcations, such as saddle-node bifurcation, Hopf bifurcation, and saddle homoclinic bifurcation. Finally, some numerical simulations are given to verify the analytic results.

Introduction

In recent years, smart materials have been widely used in many fields, such as aircraft manufacturing [1,2], control [3], energy [4,5], and medicine [6], due to their special properties. The discovery and application of shape memory alloys [7-9] is an important part of smart materials. A shape memory alloy (SMA) [10] is a new type of smart material with a shape memory effect and pseudo-elasticity, which can recover a previously defined shape when subjected to an appropriate thermomechanical loading process. SMA spring oscillators can exhibit rich dynamic behaviors based on their pseudo-elasticity, thus promoting the study of the nonlinear dynamics and bifurcations of shape memory oscillators [11-15]. Savi et al. [16] studied the nonlinear dynamics of shape memory alloy systems and established a constitutive model of the SMA. Fu and Lu [17] investigated the nonlinear dynamics and vibration damping of dry friction oscillators with SMA restraints. Costa et al. [18] applied the extended time-delayed feedback approach to investigate chaos control of an SMA two-bar truss. De Paula et al. [19] controlled a shape memory alloy two-bar truss by the delayed feedback method. The governing equation of motion of a shape memory oscillator [20,21] is given by

$$m\ddot{x} + c\dot{x} + K(x, T) = F\cos(\omega t), \qquad (1)$$

where q = (qA/L), b = (bA/L³), and e = (eA/L⁵); m is the mass of the oscillator; F cos(ωt) is a periodic external force; and K(x, T) = q(T − T_M)x − bx³ + ex⁵ is the restoring force of the spring. L and A denote, respectively, the length and cross-sectional area of the shape memory element; b, e, c, and q are material constants; and T_M corresponds to the temperature at which the martensitic phase is stable. In 2016, Yu et al. [22] considered a typical dimensionless form of the SMA oscillator based on equation (1),

$$\ddot{x} + \alpha_1\dot{x} + \alpha_2 x - \alpha_3 x^{3} + \alpha_4 x^{5} = k\cos(\theta t), \qquad (2)$$

and they added a time-delayed position feedback to control equation (2), so that equation (2) can be rewritten as

$$\ddot{x} + \alpha_1\dot{x} + \alpha_2 x - \alpha_3 x^{3} + \alpha_4 x^{5} = k\cos(\theta t) + A_1 x_\tau + A_2 x_\tau^{3}, \qquad (3)$$

where x_τ = x(t − τ), τ denotes the delay, and A_{1,2} are the delayed position feedback parameters. If k cos(θt) is considered as the control parameter δ, equation (3) can be rewritten as

$$\ddot{x} + \alpha_1\dot{x} + \alpha_2 x - \alpha_3 x^{3} + \alpha_4 x^{5} = \delta + A_1 x_\tau + A_2 x_\tau^{3}. \qquad (4)$$

They used normal form theory (NFT) and the center manifold theorem (CMT) to obtain the conditions for the Hopf bifurcation and the stability of equation (4). A deeper insight into the system dynamics is helpful for understanding the nonlinear dynamics of shape memory alloy systems. However, many studies on time-delay systems have focused on analyzing codimension-1 bifurcations, such as the Hopf bifurcation [23]. In fact, a time-delay system may exhibit more complicated dynamics when two or more parameters are varied simultaneously. The B-T bifurcation, a typical codimension-2 bifurcation, is studied in [24-28].
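Before the bifurcation analysis, the qualitative behavior of system (4) can be explored with a minimal time-domain sketch (a fixed-step Euler scheme with a constant initial history), in the same spirit as the dde23 simulations reported later; all parameter values below are placeholders, not the ones used in this paper.

```python
import numpy as np

# Placeholder coefficients for system (4); not the values used in the paper.
alpha1, alpha2, alpha3, alpha4 = 0.1, 1.0, 1.3, 0.5
A1, A2, delta, tau = 0.2, 0.1, 0.05, 1.0

dt = 1e-3
n_delay = int(round(tau / dt))
steps = 50_000

x = np.zeros(steps + 1)
y = np.zeros(steps + 1)
x[0], y[0] = 0.1, 0.0                 # constant history x(t) = x[0] for t <= 0

def x_delayed(k):
    """x(t - tau); before t = tau the constant initial history is used."""
    return x[k - n_delay] if k >= n_delay else x[0]

for k in range(steps):
    x_tau = x_delayed(k)
    dx = y[k]
    dy = (-alpha1 * y[k] - alpha2 * x[k] + alpha3 * x[k] ** 3 - alpha4 * x[k] ** 5
          + delta + A1 * x_tau + A2 * x_tau ** 3)
    x[k + 1] = x[k] + dt * dx         # explicit Euler: adequate for a qualitative sketch
    y[k + 1] = y[k] + dt * dy

print(f"x(T) = {x[-1]:.4f}, y(T) = {y[-1]:.4f}")
```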
Motivated by the above works, we consider system (4) and investigate the B-T bifurcation under certain critical conditions. The main contributions of this paper are as follows: (1) the feedback parameters A_{1,2} and the time delay τ are selected to analyze their impact on codimension-2 bifurcations of system (4); (2) the bifurcation diagram and the topological classification of the trajectories of a universal unfolding are given; (3) the second-order terms of the normal form on a center manifold of the SMA system are obtained. The layout of this work is organized as follows: in Section 2, we give conditions for the occurrence of the B-T bifurcation and discuss the normal forms for the B-T bifurcation; in Section 3, some numerical simulations are implemented to validate the analysis; conclusions are given in Section 4.

Stability and B-T Bifurcation

In this section, we mainly establish the existence of the B-T bifurcation under some critical conditions. Firstly, letting ẋ = y, system (4) can be written as an equivalent first-order system (5). Denoting the equilibrium of system (4) by E_0 = (x_0, 0), x_0 satisfies the algebraic equation

$$e_1 x_0^{5} + e_2 x_0^{3} + e_3 x_0 + e_4 = 0, \qquad (6)$$

where e_1 = α_4, e_2 = −(α_3 + A_2), e_3 = α_2 − A_1, and e_4 = −δ. Next, we discuss the existence conditions for the roots of equation (6). Let x̃ = x − x_0 and ỹ = y. Omitting the tildes, system (5) can be rewritten as system (7), and we consider the characteristic equation of system (7) at the zero equilibrium. Next, we give the conditions for the existence of the B-T bifurcation and investigate the dynamical classification near the B-T bifurcation point.

Lemma 2. If d_0 < 0, then all the roots of equation (9) have negative real parts except for the zero roots.

Proof. Clearly, F(0, τ) = d_1 + c_1 = 0. By direct calculation, if λ = 0 and τ ≠ τ_0, then (9) has the roots λ_1 = 0 and λ_2 = −α_1 < 0. When τ ≠ 0, let λ = iω (ω > 0) be a root of equation (9); this yields equation (12). Setting ω² = t > 0, equation (12) can be rewritten as equation (13), where p = α_1² − 2d_0. If d_0 < 0, then p > 0, so equation (13) has no positive roots. Thus, (iii) holds. This completes the proof.

Next, we investigate the B-T bifurcation of system (7) near (d_0, τ_0) by choosing d_1 and τ as bifurcation parameters. Let X = ΦZ + W, where Z ∈ R² and W ∈ B (equation (14)). Following [30,31], system (14) can be written in the form (15), and from [26], after some calculations, we obtain the normal form with versal unfolding on the center manifold, equation (16). The detailed calculations can be found in the Appendix.

Numerical Simulation

In this section, we use the dde23 solver in MATLAB and present some numerical simulations to illustrate the analytic results of the previous sections. To easily verify the obtained results, we fix a set of parameter values; as shown in Figure 1, the bifurcation diagram of system (16) is composed of the codimension-2 bifurcation point (v_1, v_2) = (0, 0) and three codimension-1 curves (a saddle-node bifurcation curve, a Hopf bifurcation curve, and a saddle homoclinic bifurcation curve). When the parameters v_1 and v_2 vary in different regions, system (16) exhibits different dynamic behaviors (Figure 1).

Conclusions

In this work, a shape memory alloy oscillator with delayed feedback has been analyzed. We mainly choose the two parameters d_1 = A_1 + 3x_0²A_2 and τ to investigate the B-T bifurcation of system (6).
It is demonstrated that the feedback parameters A_{1,2} and the time delay τ have an important influence on the shape memory alloy oscillator. As these two parameters of the SMA oscillator change, the conditions for the occurrence of the B-T bifurcation are established, and the corresponding phase portraits and bifurcation diagrams are given. Using the CMT and NFT of functional differential equations, we investigate some typical codimension-1 bifurcations, such as the saddle-node bifurcation, Hopf bifurcation, and saddle homoclinic bifurcation. Some numerical simulations further verify the analytic results. In this paper, the second-order terms of the normal form on a center manifold are given, but higher-order terms are not investigated. System (2) or (3) is only discussed by treating k cos(θt) as the control parameter δ (see [22]). However, the periodic force k cos(θt) has an important effect on the vibration and memory characteristics of the SMA system. Therefore, further discussion and analysis of the SMA system will be our future work.
Towards the Internet of Augmented Things: An Open-source Framework to Interconnect IoT Devices and Augmented Reality Systems † : The latest Augmented Reality (AR) and Mixed Reality (MR) systems are able to provide innovative methods for user interaction, but their full potential can only be achieved when they are able to exchange bidirectional information with the physical world that surround them, including the objects that belong to the Internet of Things (IoT). The problem is that elements like AR display devices or IoT sensors/actuators often use heterogeneous technologies that make it difficult to intercommunicate them in an easy way, thus requiring a high degree of specialization to carry out such a task. This paper presents an open-source framework that eases the integration of AR and IoT devices as well as the transfer of information among them, both in real time and in a dynamic way. The proposed framework makes use of widely used standard protocols and open-source tools like MQTT, HTTPS or Node-RED. In order to illustrate the operation of the framework, this paper presents the implementation of a practical home automation example: an AR/MR application for energy consumption monitoring that allows for using a pair of Microsoft HoloLens smart glasses to interact with smart power outlets. Introduction Internet of Things (IoT) devices have been already deployed in a relevant amount of practical solutions for different sectors and smart applications [1] and, due to their success, their number is expected to grow significantly in the next years. IoT applications, when coupled with advances in Augmented Reality (AR) and Mixed Reality (MR), have the potential to bring sensing, communication and interaction to a whole new level. Although AR/MR research was initiated in the 60s, initial prototypes lacked certain features required for being functional on a massive scale. Nevertheless, in the past years, AR/MR capabilities have been enhanced remarkably thanks to improvements in the underlying computing technologies, electronics and connectivity. In particular, the adoption of AR in the industrial sector has increased significantly due to the reduction of its commercialization price, which lead to the concept of Industrial Augmented Reality (IAR) under the Industry 4.0 paradigm [2]. One of the biggest problems faced by the developers of AR applications that interact with real-world elements (e.g., physical things) is the wide range of technologies that need to be managed to perform even the simplest interactions. This is because the most popular AR frameworks use very different development paradigms from those used by IoT devices. In addition, the professional profile of experts working in both sectors tends to be very different. For an AR application to interact with the physical world, both the AR and the IoT device need to be able to use a shared communication mechanism and to exchange messages using the same language in a way that they can understand each other. This is not straightforward, since some constraints may arise, like the heterogeneity and resource limitations of IoT hardware devices and the development restrictions often imposed by AR frameworks. Recently, different researchers have addressed the compatibility issues associated with the diversity of protocols, technologies and standards that exist in the IoT field [3]. 
However, although AR can be appointed as an attractive, convenient and complementary interface for IoT, only a few works focused on the IoT-AR interaction and none of them proposes an IoT-AR framework designed from scratch. For instance, Jo et al. [4] proposed a method in which the preferred AR tracking method is determined by the IoT device. Moreover, the same authors published recently a thorough state-of-the-art review that emphasizes open issues to enhance the IoT-AR integration [5]: distributed and object-centric data management and visualization; access, control and interaction mechanisms with the objects; and seamless interaction and content exchange interoperability. Furthermore, the researchers developed an IoT-AR proof-of-concept prototype for enhanced shopping experiences, but only a few specific details of the IoT-AR integration framework have been provided [6]. To solve the aforementioned issues and to create a ubiquitous, scalable, flexible, interoperable, open-source and easily configurable IoT-AR framework, this article presents an open-source framework that enables the integration of AR systems and IoT devices through the use of standard communication protocols. Figure 1 shows the communications architecture of the proposed IoT-AR framework, which is composed of three layers: • An AR device layer, which is composed of the different AR devices (e.g., smart glasses, smartphones, tablets). • An IoT-AR layer. It is the most relevant layer and is based on Node-RED [7], which is responsible for interconnecting the different elements of the system. Node-RED is an integration tool based on visual programming and implemented in Node.js. It enables the manipulation of data flows between different components in a very simple way. Node-RED is in charge of translating all the protocols involved in the requests and messages exchanged by the system. In addition, Node-RED allows data flows to change dynamically very easily according to the requirements of the system. Thus, thanks to a structure of nodes and links between them, Node-RED enables modifying system parameters simply by changing connections between two nodes, without the need for adapting the implementation of any part of the system, which adds flexibility and reduces the probability of coding errors. Furthermore, Node-RED also allows for interconnecting data flows of different protocols in a simple and intuitive way without having to know the specific details of each protocol. Specifically, these advantages are used to interact with the IoT Ecosystem (with the help of the Message Queuing Telemetry Transport (MQTT) protocol [8]) and with the AR Devices (through HTTP exchanges). • The IoT ecosystem layer includes the different IoT devices and their communications protocols. Although the framework could make use of different messaging protocols to communicate Node-RED and the IoT nodes [9], MQTT was chosen because it is widely used for heterogeneous IoT networks, it is lightweight and it makes use of persistent connections to exchange messages among nodes in a seamless way, even if they are located behind a network with Network Address Translation (NAT) or protected with a firewall that prevents incoming connections. External APIs Auxiliar services Implementation The design described in the previous section was implemented and the developed systems are available along with implementation examples in an Open-Source repository on Github [10]. The main details on such an implementation are given in the next subsections. 
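To make the message path through the IoT-AR layer concrete, the sketch below shows the kind of MQTT client an IoT node in the ecosystem layer could run; the broker address, topic names, and JSON payloads are hypothetical, since the actual topics are defined in the Node-RED flows of the framework repository.

```python
import json
import paho.mqtt.client as mqtt   # paho-mqtt, 1.x callback style

BROKER = "broker.example.org"          # hypothetical broker address
CMD_TOPIC = "iotar/outlet1/cmd"        # hypothetical command topic (AR -> Node-RED -> device)
STATE_TOPIC = "iotar/outlet1/state"    # hypothetical state topic (device -> Node-RED -> AR)

def on_connect(client, userdata, flags, rc):
    # Listen for commands relayed by Node-RED on behalf of the AR application.
    client.subscribe(CMD_TOPIC)

def on_message(client, userdata, msg):
    command = json.loads(msg.payload.decode())
    # e.g. {"switch": "on"}: drive the relay here, then report the new state back.
    state = {"switch": command.get("switch", "off"), "power_w": 42.0}
    client.publish(STATE_TOPIC, json.dumps(state))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)
client.loop_forever()
```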
AR Device Layer Currently, the most popular AR devices are smartphones and tablets due to their processing capacity and low cost. This means that AR applications have been usually developed for Android and iOS. However, in the last years, numerous AR specific devices such as smart glasses and Head-Mounted Displays (HMDs) are emerging, so there are multiple alternatives for choosing among hardware devices, supported platforms and software development tools [11]. Microsoft HoloLens smart glasses [12] are currently the commercial device that offers the best AR and MR experience, mainly due to two factors that differentiate them from their competitors: their specific acceleration hardware and their software platform. Unlike other AR/MR devices, Microsoft HoloLens offers a software platform that consists of a Windows operating system completely adapted to AR. This software platform provides developers with tools that greatly ease their work. Nonetheless, low-level actions such as video processing or working with sockets can be difficult as it is not directly supported by the Software Development Kit (SDK), and require specific knowledge. Nevertheless, HTTP communications are provided by the SDK and can be implemented in an easy way. The AR SDK from HoloLens is composed of different modules (the most relevant ones are depicted in Figure 2) that interact with the hardware to perform all the necessary tasks to provide a good AR experience to the user. Specifically, the Gaze Manager and Gaze Stabilizer monitor the position of the users' head and the point they are looking at. The Spatial Mapping is in charge of maintaining and updating a 3D scanned model of the environment that surrounds the user so that the virtual elements can interact with it. Finally, when the users make a gesture with their hand, the Gesture Manager module recognizes it and triggers an action related to one of the handler modules. If an action requires communication with the real world, the Service module of the IoT-AR framework is called. Such a module generates an HTTP request that is sent asynchronously to Node-RED and, if it is a query, it waits for the answer and notifies the application by means of a callback. The Service Module is completely asynchronous to avoid blocking the main interface thread. This was achieved by developing a set of listeners: the elements that consume data from the service API need to subscribe to them in order to receive notifications on the available data. IoT-AR layer This layer is composed of an MQTT server, the core of Node-RED and its API REST. It is in charge of routing the messages so that each IoT device is able to receive and send information from and to the appropriate AR device. When an HTTP request arrives to Node-RED from the AR application, it is decoded, and it is decided the IoT device must be forwarded by using a specific header that contains a session token. Then, the request is routed to the corresponding IoT device via a MQTT message that is published on a specific topic. In the same way, when an IoT device sends information using MQTT, Node-RED saves such an information and updates the AR device that request it when it asks for updates. IoT Ecosystem One of the major problems of IoT devices is their limitations in terms of computational power, as well as their usual dependence on batteries. 
For such reasons, their communications protocols should be lightweight, which enables optimizing the use of resources (e.g., to minimize computational load and storage needs) and energy consumption, as well as to reduce the communications overhead related to the lower layers of the communications stack [13]. In addition, when IoT devices have to receive remote requests, they must be provided with a mechanism that allows them to respond to events fast without needing to expose their ports on the Internet, even when they may be behind a firewall. Furthermore, the use of standard protocols can be helpful, since they provide scalability, interoperability and functionality to IoT heterogeneous systems. Therefore, the used lightweight protocols have to be flexible, provide support for plug-and-play mechanisms and be compatible with already existing protocols to be implemented into different types of devices (e.g., beacons, wearables, Programmable Logic Controllers (PLCs), gateways or smart garments [14]). To comply with the above-mentioned characteristics, MQTT was selected for exchanging messages. Such a protocol is also supported by the most common open-source firmwares. Each IoT device subscribes to an MQTT topic and listens permanently to it, waiting for the arrival of new requests. Moreover, an IoT device can initiate the communication to notify status changes or alerts. AR Energy Monitoring and Control: A Practical Use Case In order to verify the proposed IoT-AR framework, a use case for energy consumption monitoring was devised, where Microsoft HoloLens smart glasses are able to interact with smart power outlets. Specifically, the developed system is able to show the real-time information obtained from a current sensor connected to a Sonoff Pow socket [15]. In addition, users can interact directly with the power socket through the smart glasses: commands can be sent from the glasses to open or to close an electrical switch connected to the socket. The dashboard of the developed application is shown in Figure 3. Such a dashboard can be moved by the user, who can place it in any position of the real world. Once the dashboard is placed, the user can interact with the buttons by looking at them and by carrying out predefined HoloLens hand gestures. Such gestures are interpreted by the gesture recognizer module integrated in the AR SDK and are passed to the service module of the framework that is in charge of transmitting the data over the network so that IoT devices can receive commands and perform the actions that the user requests. Conclusions AR is a powerful technology that enables user interaction in different areas, whose use is expected to grow significantly in the next years. However, its full potential can only be reached if applications are able to interact with the real world in real-time and as seamlessly for the user as possible. This paper has presented an open-source framework that eases the implementation of communication mechanisms between IoT devices and AR/MR applications. After detailing the proposed design and implementation, the framework utility was demonstrated by developing an AR/MR application for Microsoft HoloLens that enables two-way real time interaction with an intelligent power socket. 
Although the presented framework was applied to an energy monitoring solution, it can be easily adapted to a broad range of IoT devices and to other AR/MR-based end-user applications (e.g., preventive maintenance, augmented communication, context-aware applications or enhanced localization, among others). As a consequence, further work will be focused on upgrading the IoT-AR framework design and on performing additional experiments in terms of usability and performance in industrial scenarios. Conflicts of Interest: The authors declare no conflict of interest.
Hodge-Deligne polynomials of character varieties of abelian groups Let F be a finite group and X be a complex quasi-projective F-variety. For r in N, we consider the mixed Hodge-Deligne polynomials of quotients X^r/F, where F acts diagonally, and compute them for certain classes of varieties X with simple mixed Hodge structures. A particularly interesting case is when X is the maximal torus of an affine reductive group G, and F is its Weyl group. As an application, we obtain explicit formulae for the Hodge-Deligne and E-polynomials of (the distinguished component of) G-character varieties of free abelian groups. In the cases G=GL(n,C) and SL(n,C) we get even more concrete expressions for these polynomials, using the combinatorics of partitions. Introduction The study of the geometry, topology and arithmetic of character varieties is an important topic of contemporary research. Given a reductive complex algebraic group G, and a finitely presented group Γ, the G-character variety of Γ is the (affine) geometric invariant theory (GIT) quotient When the group Γ is the fundamental group of a Riemann surface (or more generally, a Kähler group), these spaces are homeomorphic to moduli spaces of G-Higgs bundles via the non-abelian Hodge correspondence (see, e.g. [1,2]) and have found interesting connections to important problems in Mathematical Physics in the context of mirror symmetry and the geometric Langlands correspondence. Recently, some interesting formulas were obtained by Hausel, Letellier and Rodriguez-Villegas for the so-called E-polynomial of smooth GL n, ( )-character varieties of surface groups, by applying arithmetic harmonic analysis to their -models and proving these are polynomial count [3,4]. By computing indecomposable bundles on algebraic curves over finite fields, Schiffmann determined the Poincaré polynomial of the moduli spaces of stable Higgs bundles, hence of the corresponding GL n, ( )-character varieties of surface groups [5]. Other methods based on point counting were employed by Mereb [6] (the SL n, ( ) case) and (the singular, small n case). Moreover, geometric tools were developed by Lawton, Logares, Muñoz and Newstead to calculate the E-polynomials using stratifications of character varieties (over ) of surface groups, exploring directly the additivity of these polynomials [8,9]. This led to the development of a Topological Quantum Field Theory for character varieties by González-Prieto et al. [10,11]. In the present article, we deal instead with G-character varieties of free abelian groups, and with the determination of their mixed Hodge structures (MHSs) for a general complex reductive G. In particular, we explicitly compute the mixed Hodge polynomials of these varieties. The mixed Hodge polynomial μ X is a three variable polynomial μ μ t u v , , X X = ( ) defined for any (complex) quasi-projective variety X and encodes all numerical information about the MHS on the cohomology of X, generalizing both the Poincaré and the E-polynomials. To present our main results, denote the G-character variety of the free abelian group Γ r ≅ , r ∈ , by: where // stands for the (affine) GIT quotient (see, e.g., [12,13]) for the natural G-action, by conjugation, on the space of representations Hom G , ). This later space consists of pairwise commuting r-tuples of elements of G and is of relevance in Mathematical Physics, namely, in the context of supersymmetric Yang-Mills theory [14]. 
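To make the Weyl-group average behind Theorem 1.1 concrete in the GL(n,C) case, where W = S_n permutes the coordinates of the maximal torus T ≅ (C*)^n, the following sketch computes the graded dimension of the S_n-invariants of H*(T^r) as (1/n!) Σ_σ det(I + t P_σ)^r; only a single Poincaré-type variable t is tracked here, and the precise (t, u, v) conventions of the theorem are those stated in the paper, not reproduced by this snippet.

```python
from itertools import permutations
import sympy as sp

t = sp.symbols("t")

def perm_matrix(p):
    """Permutation matrix of p acting on H^1(T) for T = (C*)^n."""
    n = len(p)
    return sp.Matrix(n, n, lambda i, j: 1 if p[i] == j else 0)

def invariant_poincare(n, r):
    """(1/n!) * sum over sigma in S_n of det(I + t*P_sigma)^r."""
    total = sp.Integer(0)
    for p in permutations(range(n)):
        total += (sp.eye(n) + t * perm_matrix(p)).det() ** r
    return sp.expand(total / sp.factorial(n))

# Example: n = 2 (so W = S_2) and r = 2; expected: 1 + 2t + 2t^2 + 2t^3 + t^4,
# a palindromic polynomial, as Poincaré duality for these orbifolds suggests.
print(invariant_poincare(2, 2))
```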
When r is even, r is also a Kähler group (the fundamental group of a Kähler manifold) and the smooth locus of G m 2 ( ) is diffeomorphic to a certain moduli space of G-Higgs bundles over a m-dimensional abelian variety (see, for instance [15]). The topology and geometry of character varieties of free abelian groups have been studied by Florentino-Lawton, Sikora, Ramras-Stafa, among others (see, e.g., [16][17][18][19]). It is known that the affine algebraic variety G r is not in general irreducible, but the irreducible component of the trivial r -representation, denoted G r 0 , has a normalization G r ⋆ isomorphic to T W r / ([17, Theorem 2.1]), where T G ⊂ is a maximal torus and W is the Weyl group, acting diagonally on T r (hence also on its cohomology). Thus, the varieties G r ⋆ are singular orbifolds of dimension r T dim with a special kind of MHSs, called balanced or of Hodge-Tate type and they satisfy the analogue of Poincaré duality for MHS. When r 2 = , Thaddeus proved that G 2 ⋆ are of crucial importance in mirror symmetry and Langlands duality and computed their orbifold E-polynomials [20]. Here, we obtain the following explicit formula for mixed Hodge polynomials of G r ⋆ . where A g is the automorphism induced on H T, 1 ( ) by g W ∈ , and I is the identity automorphism. One consequence of this result is a formula for the (compactly supported) E-polynomial of the irreducible component G G r r 0 ⊂ , for every such G (Theorem 5.4). Our approach to Theorem 1.1 is based on working with equivariant MHSs and their corresponding equivariant polynomials, defined for varieties with an action of a finite group, and focusing on certain classes of balanced varieties. In particular, we generalize to the context of the equivariant E-polynomial, some of the techniques introduced in [21] for dealing with equivariant weight polynomials. For the groups G GL n, = ( ) and SL n, ( ), we have that G r is an irreducible normal variety, and the formula in Theorem 1.1 can be made even more concrete, in terms of partitions of n, and allows explicit computations of the Hodge-Deligne, Eand Poincaré polynomials of the corresponding character varieties G G r r = ⋆ . We state the main results below in the compactly supported version, the one which is relevant in arithmetic geometry (see [3], Appendix). Let n denote the set of partitions of n ∈ . By n n 1 2 a a a n n where n n 1 2 a a a n n 1 2 Theorem 1.2 generalizes, to every r n , 1 ≥ , some formulas recently obtained in [7,9] (the cases n 2 = and n 3 = ) by different methods, which are only tractable for low values of n: the approach in [8,9] uses stratifications and fibrations to compute E-polynomials of character varieties of free groups respectively, surface groups; the computations in [7] apply representation theory of finite groups and point counting of varieties over finite fields. By substituting x 1 = in E G c r , we obtain the Euler characteristics of these moduli spaces. Moreover, by showing that G r have very special MHSs (that we call round, see Definition 3.7), Theorems 1.1 and 1.2 immediately provide explicit formulas for their mixed and Poincaré polynomials (Theorem 5.13). The GL n, ( ) case is particularly symmetric, as the generating function of mixed Hodge polynomials gives precisely the formula of J. Cheah [22] for the mixed Hodge numbers of symmetric products. 
On the other hand, by examining the action of W on the cohomology of a maximal torus, our methods allow for the computation of μ G r for all the classical complex semisimple groups G. These will be addressed in upcoming work. We now outline the contents of the article. In Section 2, we review necessary background on MHS, quasi-projective varieties, etc., and define the relevant polynomials, providing examples and focusing on balanced varieties. In Section 3, we study properties of special MHS, related to notions defined in [21], and pay special attention to round varieties, for which the knowledge of either the Poincaré polynomial or the E-polynomial allows the determination of μ. Section 4 is devoted to equivariant MHS, character formulas and the cohomology of finite quotients. Finally, in Section 5 we prove our main theorem and provide explicit calculations of Hodge-Deligne and E-polynomials (and Euler characteristics) of character varieties of r , in particular for GL n, ( ) and SL n, ( ); in the GL n, ( ) case, the computations are related to MHS on symmetric products, thereby obtaining a curious combinatorial identity. In the Appendix, we present a proof, based on [21], of the equivariant version of a theorem in [8,9] on the multiplicative property of the E-polynomial for fibrations. A preliminary version of the main results has been announced in [23]. Preliminaries on character varieties and on MHSs We start by recalling the relevant definitions and properties of character varieties and of mixed Hodge structures (MHSs) on quasi-projective varieties, which serves to fix terminology and notation. Character varieties Given a finitely generated group Γ and a complex affine reductive group G, the G-character variety of Γ is defined to be the (affine) GIT quotient (see [12,13]; [24] for topological aspects): Note that Hom G Γ, ( ), the space of homomorphisms ρ G : Γ → , is an affine variety, as Γ is defined by algebraic relations, and it is also a G-variety when considering the action of G by conjugation on Hom G Γ, ( ). The aforementioned GIT quotient is the maximal spectrum of the ring Hom G Γ, Hom The GIT quotient does not parametrize all orbits, since some of them may not be distinguishable by invariant functions. In fact, it can be shown (see, e.g., [13]) that the conjugation orbits of two representations ρ ρ G , : Γ ′ → define the same point in Hom G G Γ, ( )/ / if and only if their closures intersect: G ρ ⋅ ∩ G ρ ⋅ ′ ≠ ∅ (in either the Zariski or the complex topology coming from an embedding Hom G Γ, N ( )↪ ). For detailed definitions and properties of general character varieties, we refer to [16,25]. In this article, we will be mostly concerned with the case when Γ is a finitely generated free abelian group, Γ r = for some natural number r, the rank of Γ. The corresponding G-character varieties: ) is of central importance in determining the so-called moduli space of vacua of supersymmetric gauge theories on a r-dimensional torus, as studied in [14,26] and others. MHSs On a compact Kähler manifold X the complex cohomology satisfies the Hodge decomposition H X, ) for all p q k 1 + = + . This notion can be generalized to quasi-projective algebraic varieties X over , possibly non-smooth and/or non-compact. 
Namely, the complex cohomology of any such variety is also endowed with a natural filtration, the Hodge filtration F, and moreover, there is a special second increasing filtration on the rational cohomology: (1) The specialization of μ X for u v 1 = = gives the Poincaré polynomial of X: ) being the Betti numbers of X. Note that the coefficients of μ X and of P X are non-negative integers, whereas E X lives in the ring u v , [ ]. (2) As mentioned earlier, there is an entirely parallel theory for the compactly supported cohomology. Here, the associated Hodge numbers are denoted by h H X dim c k p q c k p q , , , , ≔ ( ). If X stands for one of the polynomials in the aforementioned definition, we will distinguish its compactly supported version by writing X c . (3) Comment on terminology: there are inconsistencies in the literature on the terminology used for these polynomials. Since h X k p q , , ( ) are generally called Hodge-Deligne (or mixed Hodge) numbers, we refer to μ X as Hodge-Deligne or mixed Hodge polynomial. To emphasize the distinction, the compactly supported E-polynomial E X c will also be called the Serre polynomial of X, since its crucial behavior, as a generalized Euler characteristic, was first used by Serre in connection with the Weil conjectures (see [30]). (4) Many specializations of the E-polynomial have been studied in the literature. There is, for example, the weight polynomial W y w X y 1 [21]). This is a specialization of the E-polynomial since W y E y y , X X ( ) = ( ). Also, Hirzebruch's χ y -genus and the signature σ of a complex manifold X are given, in terms of E u v , X ( ), as: respectively (see Hirzebruch [31]). We now collect some well-known important properties of these polynomials, for later use. Proposition 2.4. For a quasi-projective variety X, we have: (1) The polynomials μ X and E X are symmetric in the variables u and v; in particular, if h X 0 c is additive for stratifications of X by locally closed subsets, and its degree is equal to X 2 dim . (5) All polynomials μ X , P X and E X are multiplicative under Cartesian products. 1(4). □ A common feature of the varieties in this paper is that their MHS is "diagonal:" for each k, the only nonzero mixed Hodge numbers are h k p q , , with p q = . Definition 2.5. A quasi-projective variety X is said to be balanced or of Hodge-Tate type if for every nonnegative integer k 0 ∈ , and all p q Example 2.6. (1) If X is connected, H X, 0 ( ) ≅ has always a pure Hodge structure, with trivial decomposition H X, 0 ( ) = H X 0,0,0 ( ). Dually, when X is also smooth, the compactly supported cohomology is also a trivial decom- (4) Consider the total space X of the trivial line bundle over an elliptic curve X Λ ≅ ( / ) × , where Λ is a rank two lattice in , ( +). It is easy to see that X is real analytically isomorphic to 2 ( ) * (but not complex analytically or algebraically isomorphic). From the Künneth isomorphism and considerations analogous to Example 2.6, we get: (1) The last example is a very special case (the genus 1, rank 1 case) of the non-abelian Hodge correspondence mentioned in Section 1, which produces diffeomorphisms between (Zariski open subsets of) moduli spaces of flat connections and certain moduli spaces of Higgs bundles over a given Riemann surface. The fact that one diffeomorphism type is balanced (the flat connection side of the correspondence) and the other is pure is a general feature (see [3,4]). 
(2) If X is balanced, its E-polynomial depends only on the product uv, so it is common to adopt the change of variables x uv ≡ . When written in this variable, Separably pure, elementary and round varieties In this section, we collect many properties of MHS that are necessary later on. We also describe the types of Hodge structures that allow the recovery of the mixed Hodge polynomial given the Eor the Poincaré polynomial (Theorem 3.6), and concentrate on the case of round varieties, which are the Hodge types of our character varieties. We tried to be self-contained for the benefit of researchers in the field of character varieties or Higgs bundles that may not be familiar with MHS. Elementary and separably pure varieties The MHSs on the cohomology of a given quasi-projective variety X may be trivial, i.e., the decomposition of every H X, k ( ) is the trivial one, and many such examples are considered here. When this happens, the only non-zero h X k p q , , ( ) satisfy q p = (by Proposition 2.4(1)) and much of what can be said about the cohomology can be transported to MHSs. Adapting some notions from [21] (who worked with the weight polynomial), we introduce the following terminology. Definition 3.1. Let X be a quasi-projective variety. X (or its cohomology) is called elementary if its MHSs are trivial decompositions of the cohomology, so that for every k ∈ there is only one p ∈ such that h X 0 . X is said to be separably pure if the MHS on each H X, k ( ) is in fact pure of total weight w k , and such that w w j k ≠ for every j k ≠ . In this case, A general weight function is not enough to recover μ X from the weight or the E-polynomials (different degrees of cohomology may have equal total weights). However, this can be done (see Theorem 3.6) if the weight function k p k ↦ is injective, in which case the equality (2) takes the stronger form: (2) In a pure Hodge structure of total weight k on H X, k ( ) the only non-zero weight summand is Gr H X, ). So, a pure total cohomology is separably pure, but not conversely, as the case * shows (Example 2.6). (3) When X is separably pure, instead of the weight function, one can define a degree function p q , Noting that, in fact, the degree k only depends on the total weight p q + (being separably pure) we can write this as p q k , In this article, most varieties are both separably pure and balanced, and an alternative characterization follows. Lemma 3.3. A quasi-projective variety X is separably pure and balanced if and only if it is elementary and its weight function k p k ↦ is injective. Proof. If X is separably pure, the total weight in each H X, k ( ) has to be constant. But if X is also balanced, given k, all h X k p q , , ( ) vanish except for a unique pair p q p p , , k k ( ) = ( ), so we have an assignment k p k ↦ proving that X is elementary. Moreover, since the total weights are different for distinct k, the weight function is injective. The converse statement is easy since an elementary variety is Hodge-Tate and an injective weight function implies injectivity for total weights. □ (writing x uv = , see Remark 2.7(2)). So, GL 3, ( ) is elementary (hence balanced) but not separably pure: both degrees 4 and 5 have associated total weight 6 (the terms with x 3 ), so GL n, ( ) is not separably pure, for n 3 ≥ . Moreover, the same argument readily shows that GL n, ( ) is not elementary for n 5 ≥ . 
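For concreteness, assuming the standard description of H*(GL(n,C)) as an exterior algebra on primitive classes of degree 2k−1 and Hodge type (k,k), k = 1, …, n, the example above corresponds to

$$E_{\mathrm{GL}(3,\mathbb{C})}(x) = (1-x)(1-x^{2})(1-x^{3}) = 1 - x - x^{2} + x^{4} + x^{5} - x^{6},$$

where the two weight-$x^{3}$ contributions, from the degree-4 class $a_{1}a_{3}$ and the degree-5 class $a_{5}$, cancel in the alternating sum; this is exactly the coincidence of total weights in degrees 4 and 5 noted above.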
The aforementioned examples show that this "yoga of weights," as alluded by Grothendieck, is very useful in understanding general properties of certain classes of varieties. When we know that a particular variety X has a degree or a weight function as above, we can determine the full collection of triples k p q , , In Figure 1, the shaded area illustrates Lemma 3.3; for the definition of round, see Section 3.2. The next result shows that elementary and separably pure are indeed the correct notions to be able to determine the mixed Hodge polynomial from the Poincaré or the E-polynomial, respectively. Theorem 3.6. Let X be a quasi-projective variety of dimension n. Then: (1) If X is elementary, with known weight function, its Poincaré polynomial determines its Hodge-Deligne polynomial. (2) If X is separably pure, with known degree function, its E-polynomial determines its Hodge-Deligne polynomial. Proof. (1) Suppose the Poincaré polynomial of X is P t b t , and the degree function as p q k , p q ( ) ↦ + , since the total weights are in one-to-one correspondence with the degrees of cohomology, we obtain μ t u v , , Round varieties From Theorem 3.6, if a variety X is both balanced and separably pure, then μ X can be recovered from either E X or P X , knowing their degree/weight functions. A specially interesting case is the following. Definition 3.7. Let X be a quasi-projective variety. If the only non-zero Hodge numbers are of type h X k k k , , ( ), k X 0, , 2 dim ∈ { … }, we say that X is round. In other words, a round variety is both elementary and separably pure and its only k-weights have the form k k , ( ). Round varieties are referred to as "minimally pure" balanced varieties in Dimca-Lehrer (see [21, (1) The Hodge-Deligne polynomial of X reduces to a one-variable polynomial, and can be reconstructed from either the E or the Poincaré polynomial: , . (2) The Cartesian product X Y × is round. Proof. (1) If X satisfies Poincaré duality on MHS, and X n dim = , one has If X is additionally round, analogously to Proposition 3.9, μ X c can be reconstructed from P X c and E X c as: , . (2) A sufficient condition for roundness is the following: if X is balanced and separably pure and its cohomology has no gaps, in the sense that for every k ∈ , the condition H X, , then X is round. This is easy to see from Lemma 3.3 and the restrictions on weights (Proposition 2.4(2)). Cohomology and MHSs for finite quotients Let F be a finite group and X a complex quasi-projective F-variety. In this section, we outline some results on the cohomology and MHSs of quotients of the form X F r / , where F acts diagonally on the Cartesian product X r , for general r 1 ≥ . Of special relevance is a formula, in Corollary 4.8, for the Hodge-Deligne polynomial of X F r / for an elementary variety X whose cohomology is a simple exterior algebra. Equivariant MHSs The MHS of the ordinary quotient X F / is related to the one of X and its F-action, as follows. Since F acts algebraically on X, it induces an action on its cohomology ring preserving the degrees, and, by Proposition 2. , , . (1) X is obtained by replacing each representation in X F by its dimension; (2) The Künneth formula and Poincaré duality, for X smooth, are compatible with equivariant MHS: where ⊗ means that we take tensor products of graded F -representations. Proof. (1) This follows immediately from the definition of dimension of representation. 
For (2), it suffices to see that the Künneth and Poincaré maps are also morphisms in the category of F -modules, which is easily checked. □ Cohomology of finite quotients We recall some known facts concerning the usual and the compactly supported cohomology of the quotient X F / . Consider its equivariant cohomology, defined on rational cohomology by where EF is the universal principal bundle over BF, the classifying space of F, and EF X F × is the quotient under the natural action, which admits an algebraic map EF X X F F π × ⟶ / . Since F is finite, so is the stabilizer of any point for the F action, and the Vietoris-Begle theorem (see e.g. [ Proof. Assume first that F acts freely on X. Then, X F / has a well-defined manifold structure, and one can realize the pullback in cohomology by the pullback in differential forms. In particular, this shows that the image of the pullback π H X F H X : , , ( / ) → ( ) * * * is given by H X, F ( ) * . Using (5), this means that the pullback map is bijective onto H X, F ( ) * . If F does not act freely, the same argument can be reproduced for the de Rham orbifold cohomology, in which representatives of orbifold cohomology classes are sections of exterior powers of the orbifold cotangent bundle (see [35]). The result then follows because, for manifolds such as X, the de Rham orbifold cohomology reduces to the usual de Rham cohomology. □ The isomorphism of (5) can be obtained as the pullback of the algebraic map π X X F : → / . Given that pullbacks of algebraic maps preserve MHSs, we see that this isomorphism respects MHS (see also [8]) , . Moreover, since orbifolds satisfy Poincaré duality (see Satake [36], where these are called V -manifolds), this isomorphism is also valid for the compactly supported cohomology. Proof. Given equation (6) In general, if we denote the character of an F-module V by χ V , because of the properties of these with respect to direct sums, we have: , , where μ t u v , , X F ( ) is viewed as an F-module, and equivalently as a direct sum of modules graded according to the triples k p q , , ( ) . Let F | | be the cardinality of F. is a decomposition of V into irreducible sub-representations, then by the Schur orthogonality relations, the coefficient of the trivial one-dimensional representation 1 is given by: Applying this to V μ t u v , , X F = ( ) gives, in view of Corollary 4.4: and the wanted formula follows from equation (7). □ . An interesting application of Theorem 4.6 is when the cohomology of X is an exterior algebra. To be precise, we say that H X, ( ) * is an exterior algebra of odd degree k 0 if: and all other cohomology groups are zero. ≤ . Then, for r 0 > and the diagonal action of F on X r : where A g is the automorphism of H X, k ( ) corresponding to g F ∈ , and I is the identity automorphism. In particular, if X is round: Proof. First, let r 1 = . Since X is elementary and tensor and exterior products preserve MHSs, we get for all l 0 ≥ , Applying Theorem 4.6 to this case, using x uv = , we get 1 . Now, for a general F-module V , with g F ∈ acting as V Aut V g ∈ ( ), we have: = . Now, for a general r 1 ≥ , it follows from Proposition 4.2(2) that for the diagonal action μ μ Finally, the round case follows by setting p k 0 0 = . □ Abelian character varieties and their Hodge-Deligne polynomials In this section, we apply the previous formulas to the computation of the Hodge-Deligne, Poincaré and E-polynomials, of the distinguished irreducible component of some families of character varieties. 
The important case of GL(n, ℂ)-character varieties leads to the action of the symmetric group on a torus and is naturally related to work of I. G. Macdonald [38] and of J. Cheah [22] on symmetric products.

Mixed Hodge polynomials of abelian character varieties

As in Section 2.1, let G be a connected complex affine reductive group. For simplicity, the G-character variety of Γ = ℤ^r, a rank r free abelian group, will be denoted by M_r G := Hom(ℤ^r, G)//G. In general, the varieties M_r G (as well as Hom(ℤ^r, G)) are not irreducible. But there is a unique irreducible subvariety containing the identity representation, constructed as the image of T^r under the composition of the inclusion T^r ⊂ Hom(ℤ^r, G) with the GIT projection π, where T is a fixed maximal torus of G. This image is then a closed subvariety of M_r G (see [20]) that we call the distinguished component and denote by M_r^0 G. Let W be the Weyl group of G, acting by conjugation on T. We quote the following result from [17], which relates the distinguished component to the quotient T^r/W. As in Section 1, denoting by A_g the automorphism of H^1(T, ℚ) given by g ∈ W and by I the identity, the mixed Hodge polynomial of T^r/W is

(11)  μ_{T^r/W}(t, u, v) = (1/|W|) Σ_{g∈W} det(I + t u v A_g)^r.

Proof. Since Cartesian products of round varieties are round, and the maximal torus of G is isomorphic to (ℂ*)^n for some n, T is a round variety and has an algebraic action of W. Then W also acts diagonally on T^r = ((ℂ*)^n)^r, so T^r/W is also round by Corollary 4.5. Moreover, the cohomology of T is an exterior algebra of degree k_0 = 1, so Corollary 4.8 immediately gives the desired formula for T^r/W. The theorem follows from the isomorphism M_r^0 G ≅ T^r/W. □

Using Remark 3.10(1), we obtain the analogous formula in the compactly supported case, where dim T is the rank of G. We also obtain formulas for the Poincaré and for the Serre polynomials; the Serre polynomial of M_r^0 G coincides with the one used in [19], in the context of compact Lie groups.³ As indicated in Proposition 2.4(4), the Serre polynomial (E_c-polynomial) is additive for disjoint unions of locally closed subvarieties. Therefore, for every bijective normalization morphism between algebraic varieties f: X → Y, the E_c-polynomials of X and of Y coincide. In particular, the E_c-polynomials of M_r^0 G and of its normalization coincide.

Theorem 5.8. Let r ≥ 1, and let G be a reductive group whose derived group is a classical group. Then, the mixed Hodge polynomial of M_r^0 G is given by formula (11).

This motivates the following conjecture.

Conjecture 5.9. For every r ≥ 1 and complex reductive G, formula (11) holds for M_r^0 G.

GL(n, ℂ) and SL(n, ℂ) cases

The case of G = GL(n, ℂ) is instructive, where the Weyl group is just the symmetric group, denoted S_n. If X is a variety, we denote its n-fold symmetric product by X^(n) or by Sym^n X = X^n/S_n. As a set, Sym^n X is the set of unordered n-tuples of (not necessarily distinct) elements of X. Let M_σ denote an n × n permutation matrix (in some basis) corresponding to σ ∈ S_n and let I_n be the n × n identity matrix. In this case the distinguished component is a symmetric product of tori, and it is homotopy equivalent to Sym^n((S^1)^r) (see [16]), which is the space of n (unordered) points on the compact r-torus (S^1)^r. So our results relate also to the study of the cohomology of the so-called configuration spaces on compact Lie groups. We now provide an even more concrete formula, better adapted to computer calculations, using the relation between conjugacy classes of permutations and partitions of a natural number n to compute the aforementioned determinants. For this, we set up some notation. Let n ∈ ℕ and let P_n be the set of partitions of n; we write a general partition in P_n as [1^{a_1} 2^{a_2} ⋯ n^{a_n}], where a_i ≥ 0 is the number of parts equal to i.

Proof.
To compute the determinant in Proposition 5.11, recall that any permutation σ ∈ S_n can be written as a product of disjoint cycles (including cycles of length 1), whose lengths provide a partition of n, say [n(σ)] = [1^{a_1} 2^{a_2} ⋯ n^{a_n}]. Moreover, any two permutations are conjugate if and only if they give rise to the same partition, so the conjugacy class of σ uniquely determines the non-negative integers a_1, …, a_n. If σ is a full cycle σ = (1 ⋯ n) ∈ S_n, and M_σ a corresponding matrix, by computing in a standard basis we easily obtain the conjugation-invariant expression det(I_n − λ M_σ) = 1 − λ^n. So, for a general permutation σ ∈ S_n with cycles given by the partition [n(σ)], we have
det(I_n − λ M_σ) = (1 − λ)^{a_1} (1 − λ^2)^{a_2} ⋯ (1 − λ^n)^{a_n}.

By considering the trivial action on ℂ*, this is a fibration of S_n-varieties with trivial monodromy, since it is in fact a T-principal bundle (and T is a connected Lie group). Then Theorem A.1 gives us an equality of the corresponding equivariant E-polynomials. Finally, the desired formula comes from the relations in Proposition 3.9, since all varieties in consideration are round. □

Now, we turn to the computation of some E_c-polynomials, which relate to some formulas obtained in [9]. Some of these formulas, for small n, are already present in [9]; for n ≥ 4 these formulas are new and can also be upgraded to mixed Hodge polynomials by using Remark 3.10(1) and Poincaré duality.

Example 5.19. The following table gives the explicit values of δ(n) and p_n(x) up to n = 5 (in each row, the ordering is preserved). All the formulas can be easily implemented in the available computer software packages (in this paper, most of our calculations were performed with GAP). For simplicity, the notation [1 2] refers to a partition of n = 3 with two cycles: one of length 1 and another of length 2 (not a cycle of length 12).

Proof. We now use that the variety under consideration is diffeomorphic to the cotangent bundle of the projective space P^{n−1} parametrizing semistable bundles of rank n and trivial determinant over an elliptic curve (see [15,40]).

A combinatorial identity

Appendix A. Multiplicativity of the E-polynomial under fibrations

In this appendix, we prove a multiplicative property of the E-polynomial under fibrations, used in Theorem 5.15. This is a consequence of the fact that the Leray-Serre spectral sequence is a spectral sequence of mixed Hodge structures; the resulting multiplicativity of the E-polynomial has also been used to calculate the Serre polynomials of certain twisted character varieties. We detail the argument here, for the reader's convenience. First, assume that the F-action is trivial on the three spaces. The Leray-Serre spectral sequence of the fibration is a sequence of mixed Hodge structures ([27, Theorem 6.5]), and it is proved in [21, Theorem 6.1] that, under the given assumptions, its second page E_2^{a,b}
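As a sanity check on the cycle-type determinant identity used in Section 5 above, and on the averaged-determinant shape of formula (11) for W = S_n, the following SymPy sketch enumerates S_3 explicitly. It only evaluates the expressions; the identification of the average with the mixed Hodge polynomial of the distinguished component is the result quoted in the text, not something the code proves. The helper names (perm_matrix, cycle_type) are ad hoc.

```python
from itertools import permutations
import sympy as sp

lam, t, u, v = sp.symbols("lambda t u v")

def perm_matrix(p):
    # standard permutation matrix: column j has a 1 in row p[j]
    n = len(p)
    return sp.Matrix(n, n, lambda i, j: 1 if i == p[j] else 0)

def cycle_type(p):
    # a[i] = number of cycles of length i, as in the notation [1^{a_1} ... n^{a_n}]
    n, seen, a = len(p), [False] * len(p), [0] * (len(p) + 1)
    for start in range(n):
        if not seen[start]:
            length, j = 0, start
            while not seen[j]:
                seen[j] = True
                j = p[j]
                length += 1
            a[length] += 1
    return a

n, r = 3, 2
total = 0
for p in permutations(range(n)):
    M, a = perm_matrix(p), cycle_type(p)
    lhs = (sp.eye(n) - lam * M).det()
    rhs = sp.Mul(*[(1 - lam**i) ** a[i] for i in range(1, n + 1)])
    assert sp.expand(lhs - rhs) == 0          # det(I - lam*M_sigma) identity
    total += ((sp.eye(n) + t * u * v * M).det()) ** r
mu = sp.expand(total / sp.factorial(n))        # (1/n!) * sum_sigma det(I + tuv*M_sigma)^r
print(mu)
```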
7,861
2017-11-21T00:00:00.000
[ "Mathematics" ]
Intra-Topic Variability Normalization based on Linear Projection for Topic Classification This paper proposes a variability normalization algorithm to reduce the variability between intra-topic documents for topic classification. Firstly, an optimization problem is constructed based on a linear variability removal assumption. Secondly, a new feature space for document representation is found by solving the optimization problem with kernel principal component analysis (KPCA). Finally, an effective feature transformation is applied through linear projection. In the experiments, state-of-the-art SVM and KNN algorithms are adopted for topic classification. Experimental results on a free-style conversational corpus show that the proposed variability normalization algorithm achieves a 3.8% absolute improvement in the micro-F1 measure. Introduction Topic classification now faces the problem of enormous variability between documents due to the exponential growth of free-style unstructured texts in recent years. This paper treats variability as differences between text documents and aims at reducing the intra-topic document variability for better topic classification. Various factors cause the intra-topic variability problem, such as the different language usage of different persons (Chambers, 1995; Fillmore et al., 2014). In the free-style conversations used in this paper, different people use very different words to express their opinions. Therefore, documents on the same topic can be quite different because of the intra-topic variability problem. In this work, we are interested in finding a robust document representation strategy to address the intra-topic variability problem. The traditional method represents a document by a high-dimensional TF-IDF vector based on the bag-of-words approach (Salton and McGill, 1986; Salton and Buckley, 1988). However, the TF-IDF feature reveals little semantic similarity information between terms, which increases the differences between intra-topic documents when different words are used. Beyond the TF-IDF strategy, there are two classes of techniques for document representation: unsupervised and supervised. The unsupervised techniques include latent semantic analysis methods. A typical method is Latent Semantic Indexing (LSI), in which the estimated features are linear combinations of the original features (Deerwester et al., 1990; Wang et al., 2013). Meanwhile, the popular Latent Dirichlet Allocation algorithm was proposed to represent a document by a generative probabilistic model (Blei et al., 2003; Morchid et al., 2014). Moreover, in recent years, many neural-network-based methods have been investigated for document representation (Hinton and Salakhutdinov, 2006; Srivastava et al., 2013; Le and Mikolov, 2014). For example, in (Le and Mikolov, 2014), a model called paragraph vector was designed to represent each document by a dense vector, where the vector is trained by predicting all words in the corresponding document. On the other hand, supervised techniques for document representation include discriminative approaches, e.g., Linear Discriminant Analysis (Berry et al., 1995; Chakrabarti et al., 2003; Torkkola, 2004) and supervised latent semantic indexing (Sun et al., 2004; Chakraborti et al., 2007; Bai et al., 2009).
Meanwhile, some improved linear analysis methods were proposed for encoding documents with reliable similarity information (Yih et al., 2011; Chang et al., 2013). However, all those works on document representation paid little attention to the variability of intra-topic documents; therefore, they can hardly solve the intra-topic variability problem in a direct way. This paper makes a preliminary investigation into the intra-topic variability problem. The main purpose of this work is to find a new feature space with minimized intra-topic variability. An objective criterion is constructed for optimization. Mathematically, we make use of the topic label information of the training set to create a weighting matrix, and then sum over all the differences between intra-topic documents. A robust feature space with minimized intra-topic variability is then generated by solving the optimization problem with an effective KPCA-based algorithm. Finally, we perform the variability normalization operation on the baseline features. We also employ linear discriminant analysis as a supplementary algorithm. For the experiments, state-of-the-art SVM and KNN algorithms are employed for topic classification. System performance is evaluated on a challenging free-style conversational database. The rest of this paper is organized as follows. In Section 2, we introduce the proposed variability normalization algorithm for topic classification in detail. Section 3 then presents the experimental setup and results. Finally, conclusions and future work are given in Section 4.

Motivation for variability normalization

This work aims to find a robust document representation strategy for topic classification. The proposed algorithm is motivated by the Nuisance Attribute Projection (NAP) algorithm in the speaker verification field (Solomonoff et al., 2005; Solomonoff et al., 2007). We first make a linear variability removal assumption for document representation. Mathematically, a given document can be denoted by a column vector x with dimensionality d as

x = x_t + x_v, (1)

where x_t denotes the useful signal information in the current document and x_v stands for the remaining noise. It is very difficult to model the noise signal in a document since it can come from various sources. Therefore, in this paper, we focus on the noise created by the variability among intra-topic documents. Our goal is to find a new document representation through linear projection:

x̂ = Px, (2)

where P is the projection matrix. Since the goal of this paper is not dimensionality reduction, the dimensionality of the new document representation is the same as that of the source representation; therefore, the size of P is d × d. This paper proposes to learn P by minimizing the following intra-topic variability:

Q(P) = Σ_{i,j} w_ij ||P x_i − P x_j||², (3)

where w_ij is the element in the i-th row and j-th column of a weighting matrix W created in this work.
The matrix is determined by the topic label information of the training set as follows:

w_ij = 1 if x_i and x_j belong to the same topic, and w_ij = 0 otherwise. (4)

Variability normalization algorithm

To derive the variability normalization algorithm, we follow the work of (Solomonoff et al., 2007) and re-write the projection matrix P in terms of the variability space (denoted by a unit vector v here) as P = I − vv^T (5). Combining (3) and (5), we obtain expression (6) for Q. Since the first part of Q in (6) is independent of v, we discard it and create the final criterion (7). Unfolding (7) by linear operations, we get expression (8), where X denotes the training set matrix, each row of X represents one document vector, and 1 is a vector with all elements equal to 1. Minimizing (8) is equivalent to solving the following eigenvalue decomposition problem (9). Here we apply the idea of KPCA (Solomonoff et al., 2007; Schölkopf et al., 1997) to solve (9): writing v as a new vector Xu, finding u turns into a generalized eigenvalue problem in kernel space. The variability space is then constructed by selecting the set of eigenvectors corresponding to the d_1 largest eigenvalues. Finally, a (d × d) projection matrix is obtained by combining (5), (11) and v = Xu. Based on this variability normalization algorithm, the baseline document vectors can be transformed into a new feature space with minimized intra-topic variability. The main procedure for implementing intra-topic variability normalization can be divided into the following steps (a compact sketch is given at the end of this section):
• Generate the sample matrix X using all n documents of the training set.
• Construct the weighting matrix W according to (4), using the topic label information.
• Estimate a projection matrix P by solving the aforementioned eigenvalue problem.
• Transform all documents into the new feature space through linear projection according to (2).
It should be noted that the feature transformation performed by the proposed variability normalization algorithm does not change the dimensionality of the document representation. This is different from existing dimensionality reduction methods, since our goal is to re-define the feature space for topic document representation. To prove the effectiveness of the proposed algorithm, this paper presents experimental results on a challenging conversational dataset.
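The four-step procedure above can be sketched in a few lines of NumPy. This sketch is illustrative only and makes simplifying assumptions: it works directly in the original feature space via an eigen-decomposition of X^T (D − W) X with D = diag(W·1), rather than the kernel-space (KPCA) formulation used in the paper, and all variable names are ad hoc.

```python
import numpy as np

def variability_normalization(X, labels, d1):
    """X: (n, d) matrix with one row per document; labels: topic id per row;
    d1: number of intra-topic variability directions to remove."""
    labels = np.asarray(labels)
    W = (labels[:, None] == labels[None, :]).astype(float)   # weighting matrix, eq. (4)
    D = np.diag(W.sum(axis=1))
    # Scatter of weighted intra-topic differences:
    #   sum_ij w_ij (x_i - x_j)(x_i - x_j)^T = 2 * X^T (D - W) X
    S = X.T @ (D - W) @ X
    eigvals, eigvecs = np.linalg.eigh(S)                      # ascending eigenvalues
    V = eigvecs[:, -d1:]                                      # top-d1 variability directions
    P = np.eye(X.shape[1]) - V @ V.T                          # eq. (5), multi-direction form
    return P

# Toy usage: learn P on training documents, then project all documents (eq. (2)).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(12, 6))
y_train = np.repeat([0, 1, 2], 4)
P = variability_normalization(X_train, y_train, d1=2)
X_train_vn = X_train @ P.T
```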
Experiments

In this section, we evaluate the proposed variability normalization method on a typical topic classification problem. We first introduce the experimental setup, including the dataset, evaluation criteria, and system descriptions. After that, all the experimental results are reported in detail.

Dataset

The dataset used in this paper consists of the text transcripts of a free-style conversational speech database, the Fisher English corpus released by LDC, which contains 11699 recorded conversations (Cieri et al., 2004). This corpus covers 40 different topics; each conversation addresses a relatively distinct topic (e.g., "Comedy", "Smoking", "Terrorism"), and some topics cover similar subject areas (e.g., "Airport Security", "Bioterrorism", "Issues in the Middle East"). This paper randomly chooses 60 documents and 50 documents per topic for the training set and testing set, respectively. Another 50 documents per topic are randomly selected for the development set.

Evaluation criteria

We use two types of criteria for a comprehensive evaluation of this work. The first criterion is the F1 measure, corresponding to the recall and precision rates of a typical classification system. In detail, we report micro-average F1 and macro-average F1 results. Considering that topic classification is similar to topic verification, we choose the equal error rate (EER) as the second criterion, i.e., the operating point at which the miss probability equals the false alarm probability.

System description

Table 1: Configurations of the baseline system.
Module | Methods
Text processing | stop-word removal, stemming
Representation | TF-IDF feature
Classification | KNN, SVM algorithms

This paper constructs several systems for comparison. The configurations of our baseline system are shown in Table 1. The Porter algorithm (Porter, 1980) is adopted for word stemming after stop-word removal. A vocabulary of 19534 unique words is then determined according to the occurrence frequency information of the training set. Documents in the baseline system are represented using the popular TF-IDF term weighting strategy (Salton and Buckley, 1988). Two popular algorithms, SVM and KNN, are used for classification separately. The SVM classification is implemented using the LIBSVM toolkit (Chang and Lin, 2011). Based on the baseline system, the other systems are described below. (1) LSI: documents are represented in the latent semantic space estimated by the LSI algorithm (Deerwester et al., 1990) from the baseline features. (2) LDA: document features are transformed by linear discriminant analysis; we select 50 eigenvectors for the low-dimensional feature space. (3) VarNorm: document features are transformed from the baseline TF-IDF vectors by the approach proposed in this paper; we select 60 eigenvectors for generating the projection matrix. (4) VarNorm-LDA: a system combining VarNorm with LDA, which applies the feature transformation twice to the original TF-IDF document features; the numbers of eigenvectors for VarNorm and LDA are set to 60 and 50, respectively. All the parameters suggested in this paper are tuned on the development set. However, the eigenvector number is not restricted to 50 or 60; it is recommended to set it between 45 and 75, since there are 40 topics in the experiments.
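For readers who want to reproduce a comparable baseline, the following scikit-learn sketch mirrors the Table 1 configuration (TF-IDF features with a linear SVM and a micro/macro-F1 report). It is a generic illustration, not the authors' code: the Fisher transcripts are not included, LIBSVM is replaced by scikit-learn's LinearSVC, stemming is omitted, and the variables texts and topics are hypothetical placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def run_baseline(texts, topics):
    """texts: list of transcript strings; topics: list of topic labels."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        texts, topics, test_size=0.45, stratify=topics, random_state=0)
    vec = TfidfVectorizer(stop_words="english", sublinear_tf=True)  # stemming omitted
    clf = LinearSVC()
    clf.fit(vec.fit_transform(X_tr), y_tr)
    pred = clf.predict(vec.transform(X_te))
    return (f1_score(y_te, pred, average="micro"),
            f1_score(y_te, pred, average="macro"))
```

In the VarNorm configuration, the projection P learned from the training TF-IDF matrix (see the earlier sketch) would be applied to both training and test vectors before the classifier; computing EER would additionally require decision scores, e.g., from clf.decision_function.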
Variability normalization performance

According to (3), we compare the intra-topic variability of the baseline and the VarNorm systems; the only difference in the variability calculation is whether the projection matrix P is applied. Figure 1 shows the intra-topic variability on the 40 topics of the training set: the vertical axis represents the variability of each topic, while the horizontal axis stands for the 40 topics of the conversation corpus. As can be seen clearly, the variability of the baseline system is high, and it is reduced effectively after variability normalization. A more detailed analysis shows that topic ENG06, whose theme is "Hypothetical Situations: Perjury - Do either of you think that you would commit perjury for a close friend or family member?", has the largest variability among documents in the whole corpus, whereas topic ENG13, "Movies: Do each of you enjoy going to the movies in a theater, or would you rather rent a movie and stay home? What was the last movie that you saw? Was it good or bad and why?", has the lowest. This reflects the difference between common topics and infrequent topics: since people use various words to express their ideas, it is reasonable that the variability problem is more serious for infrequent topics than for common ones.

Classification Results using KNN

Experimental results using the KNN classification algorithm are given in Table 2. The results show that, compared to the baseline system, the variability normalization system VarNorm achieves a 2% absolute F1 improvement and a 29% relative improvement in EER. When variability removal is taken as a preliminary step and LDA is employed as the secondary transformation, the VarNorm-LDA system achieves the best performance: the EER is improved by 65% relatively, and the micro-F1 measure is improved by 6.85% absolutely. The reason for this performance is straightforward: since the proposed algorithm effectively reduces the differences among intra-topic documents, it becomes easier for the LDA algorithm to maximize the ratio of between-class variance to within-class variance.

Classification Results using SVM

Similarly, the experimental results using the SVM classification algorithm are shown in Table 3. The baseline performance is better than that of the KNN system. The improvements achieved by LSI in the KNN system almost vanish here, while the VarNorm system keeps its improvement. The VarNorm system even works better than the LDA system, with nearly 15% relative improvement in EER and 3.4% absolute improvement in micro-F1. The best results are obtained by the VarNorm-LDA system, with a 36% relative improvement in EER and a 3.75% absolute improvement in micro-F1.

Conclusions and Future Work

In this paper, we investigated the intra-topic variability problem for topic classification. The major contribution of this work is an effective variability normalization approach for robust document representation. An optimization problem was constructed under a linear variability removal assumption. To gain deeper insight into the performance of the proposed variability normalization algorithm, we conducted experiments on a challenging free-style conversation corpus. Experimental results based on both the SVM and KNN classification algorithms confirmed the robustness of the proposed approach. In conclusion, the variability normalization algorithm can be used as a front-end feature transformation strategy, and we suggest combining it with the linear discriminant analysis algorithm or other algorithms to further improve system performance. Further study will investigate adaptive methods for constructing robust feature spaces. We will also combine this work with more document representation methods. Moreover, it would be very interesting to extend and combine our work with novel unsupervised machine learning techniques, such as the work of (Zhang and Jiang, 2015), which proposed a model for high-dimensional data by combining a linear orthogonal projection and a finite mixture model under a unified generative modeling framework.
3,343.2
2016-06-01T00:00:00.000
[ "Computer Science" ]
Characteristics Analysis of the Multi-Channel Ground-Based Microwave Radiometer Observations during Various Weather Conditions : Ground-based multi-channel microwave radiometers (MWRs) can continuously detect atmospheric profiles in the tropospheric atmosphere. This makes MWR an ideal tool to supplement radiosonde and satellite observations in monitoring the thermodynamic evolution of the atmosphere and improving numerical weather prediction (NWP) through data assimilation. The analysis of product characteristics of MWR is the basis for applying its data to real-time monitoring and assimilation. In this paper, observations from the latest generation of ground-based multi-channel MWR RPG-HATPRO-G5 installed in Shanghai, China, are compared with the radiosonde observations (RAOB) observed in the same location. The detection performance, characteristics of various channels, and the accuracy of the retrieval profile products of the MWR RPG are comprehensively evaluated during various weather conditions. The results show that the brightness temperatures (BTs) observed by the ground-based MWR RPG during precipitation conditions were high, which affected its detection performance. The bias and the standard deviation (SD) between the BT observed by MWR RPG and the simulated BT during clear and cloudy sky conditions were slight and large, respectively, and the coefficient of determination ( R 2 ) was high and low, respectively. However, when the cloud liquid water (CLW) information was added when simulating BT, the bias and the SD of the observed BT and the simulated BT during cloudy days were reduced and the R 2 value improved, which indicated that CLW information should be taken into account when simulating BT during cloudy conditions. The temperature profiles of the MWR retrieval had the same accuracy of RMSEs (root-mean-square error) with heights during both clear-sky and cloudy sky conditions, where the RMSEs were below 2 K when the heights were below 4 km. In addition, the MWR RPG has the potential ability to retrieve the temperature inversion in the boundary layer, which has important application value for fog and air pollution monitoring. 
Introduction High spatial and temporal resolution atmospheric profiles are important for understanding the thermal and dynamical structure of weather processes at various scales [1]. At present, radiosonde observations (RAOB) can provide highly precise vertical atmospheric profiles. However, due to the high observation cost, the low spatial resolution (distances between stations of about 300 km or more), and the low temporal resolution (12 h), radiosonde observations are not sufficient to capture fine variations of vertical atmospheric profiles [2]. Meteorological satellites are affected by the complex surface, so the accuracy of their retrieved atmospheric profiles near the ground is limited. The ground-based microwave radiometer (MWR) is a new type of passive remote sensing instrument that mainly measures the downwelling radiance from the Earth's atmosphere. It can continuously detect temperature and humidity profiles of the atmosphere at 0-10 km altitude above the ground, which is a useful supplement to radiosonde and satellite data [3]. Meanwhile, ground-based MWRs represent an important area for the development of atmospheric sounding operations, given the great potential of ground-based vertical sounding information in improving small-scale and medium-scale forecast operations [4]. At present, ground-based MWRs are mainly dominated by the RPG-HATPRO and MP-3000, but ground-based MWRs have developed rapidly in China in recent years; the main devices include the QFW-6000, MWP967KV, and HT-GMWR. The related profile products are widely used in the fields of weather modification, meteorological support services for major events, aviation meteorology, urban pollution monitoring, etc. With the significant increase in the application of multi-channel ground-based MWRs in meteorological operations, it is important to examine the detection performance of this remote sensing equipment. Monitoring the observed brightness temperature (BT) minus the background simulated BT is critical to detect and possibly eliminate any systematic errors in MWR measurements, radiative transfer models, or NWP model predictions [5]. De Angelis et al. [5] and Liu [6] analyzed ground-based MWR BT detection and compared it with simulated BT during clear-sky conditions. Ahn et al. [7] compared the simulated BT and the observed BT of an MWR during all-sky and clear-sky conditions. It was found that liquid water in clouds caused a large bias between the observed and the simulated data. It was considered that the cloudy data were not suitable for evaluating the performance of the radiometer, and that it was necessary to use a cloud detection instrument to screen the BT data of the radiometer when cloud was present. However, what has not been considered in the existing research is the effect of adding cloud liquid water (CLW) information to the simulated BT on the evaluation of the observed BT, and the reliability of the observed BT during precipitation. The MWR receives the radiation emitted from the atmosphere in the microwave band of the spectrum and retrieves from it variables such as temperature, relative humidity, and absolute humidity. A lot of research has been conducted on the retrieval methods of ground-based MWRs. The common methods include the Bayesian maximum probability estimation algorithm [8], the one-dimensional variational retrieval method [9], the statistical regression method [10], and the neural network method [11]. Li et al. [12] and Li et al.
[13] improved the temperature and humidity profile retrievals during clear-sky and cloudy-sky conditions. Tan et al. [14] established an atmospheric profile retrieval method based on principal component analysis and stepwise regression; it was found that the retrieved profiles captured the evolution of atmospheric conditions very well. Moreover, there are many studies evaluating the accuracy of MWR retrieval profiles. For example, Liu [6] compared the temperature profiles measured by the MWR with sounding temperature profiles; the study explored the effects of altitude, season, and precipitation conditions on the performance of ground-based MWR temperature profile retrievals. Chan [15] studied the application of ground-based MWR retrieval profiles in strong convective weather; it was found that the radiometer could provide effective information for precipitation forecasting, though there were some differences between ground-based MWR data and RAOB data. Xu et al. [16] compared MWR retrievals and RAOB data during clear-sky and cloudy-sky conditions, and analyzed MWR retrieval accuracies under low, middle, and high clouds identified by IRT. It was found that the accuracy of the retrieval profiles in the lower levels was better than in the upper levels, and the cloudy-sky profiles were better than the clear-sky ones; both the temperature profile under high cloud and the vapor density profile under middle cloud had high accuracy. Bedoya-Velásquez et al. [17] used RAOB data to analyze the seasonal periodicity of temperature profiles, relative humidity profiles, and integrated water vapor measured by MWRs; it was found that the biases of the radiometer were small during clear-sky and dry conditions, but very large during cloudy-sky conditions. MWR soundings are found to be equivalent in accuracy to radiosonde soundings when used for NWP [18]. Several studies using MWR retrieval profiles have shown that they have the potential to improve the forecasting of local weather processes. For example, Olivier et al. [19] used the 3DVAR data assimilation system of Arome-WMed to assimilate the temperature and the humidity profiles of 13 ground-based MWRs; it was found that, apart from a limited improvement in the prediction of large accumulated rainfall, the impact on the prediction of other upper-air and surface meteorological elements was usually neutral. He et al. [20] used WRFDA to assimilate the temperature and humidity profiles of two ground-based MWRs for a heavy rainfall event in Beijing, China. The results showed that the assimilation of ground-based MWR data improved the forecast of precipitation intensity and distribution in the early stage of precipitation, while the assimilation of the two ground-based MWRs improved the forecast of a large-area heavy precipitation system less as the storm system developed. Qi et al. [21] and Qi et al. [22] used the rapid-refresh multiscale analysis and prediction system-short term, with the 3DVAR assimilation technique, to predict the intensity and distribution of precipitation over a large area in the Beijing region; the level-2 products of five ground-based MWRs were assimilated to improve precipitation and echo forecasting, and accurately forecast the band echo splitting and precipitation enhancement process in Beijing. Wave et al. [18] described an example of improving fog prediction based on variational assimilation of radiometric soundings. Temimi et al.
[23] evaluated the potential of MWR for nowcasting of fog formation and dissipation in hyperarid environments by analyzing its profiles. In this paper, the 2017-2019 observations of the HATPRO-G5, the latest generation of RPG ground-based multi-channel MWR, installed in Shanghai, China (31.39°N, 121.44°E, 5.5 m above sea level), are compared with RAOB data to comprehensively evaluate its detection performance, operational stability, and the performance of the temperature retrieval profiles during various weather conditions. The study aims to lay the foundation for the application of this ground-based MWR's BT data and temperature profile retrieval products. The article is organized in the following way. Section 2 presents the data and methods used in the study, including the ground-based MWR data, the data pre-processing, and the methods of statistical analysis. The analysis of the observed BT characteristics during various weather conditions is described in Section 3. Section 4 discusses the effect of CLW on BT simulations. Section 5 describes the accuracy of the temperature retrieval profiles and the performance of the temperature inversion retrieval. Finally, Section 6 summarizes the conclusions.

Data Description

The output data of the new generation MWR RPG include level-1 data (BTs) and level-2 data (retrieved products). For the level-1 data, seven channels of the RPG humidity profiler were selected, with frequencies centered at 22.24, 23.04, 23.84, 25.44, 26.24, 27.84, and 31.40 GHz, and seven channels of the RPG temperature profiler were selected, with frequencies centered at 51.26, 52.28, 53.86, 54.94, 56.66, 57.30, and 58.00 GHz. Regarding the remote sensing mechanism of the MWR, when observing at the zenith, the BT at different heights is related to the molecular density and the temperature of the corresponding layer. Since the spectral center position of any given channel is different, the transparency of each channel is different, and the microwave signals reaching the ground reflect the temperature and humidity information at different altitudes [4]. Therefore, the MWR can continuously detect the temperature, humidity, and CLW profiles in the atmospheric boundary layer and troposphere in the vertical range of 0-10 km in real time. In this work, the MWR RPG level-2 products used included the atmospheric temperature profile retrieval products, all of which contain 93 altitude layers. In this paper, observations from the latest generation of ground-based multichannel MWR RPG-HATPRO-G5, installed in Shanghai, China, are compared with the RAOB data observed at the same location. To ensure data quality, liquid nitrogen calibration was carried out once every six months.
Data Pre-Processing

To evaluate the performance of the profile products, the atmospheric temperature profiles of MWR RPG and RAOB data were processed as follows: the times of the two types of temperature profiles were kept consistent; data with instrument detection anomalies (e.g., missing RAOB data below 10 km) were eliminated to avoid large errors in interpolation; and the two types of temperature profile information were kept highly unified. According to the functional specifications formulated by the China Meteorological Administration to regulate the standards of ground-based MWRs, the vertical levels of the atmospheric temperature profiles were uniformly interpolated onto the 83 layers of the national standard. The temperature profiles thus have vertical resolutions of 25 m from the surface to 500 m, 50 m from 500 m to 2000 m, and 250 m from 2000 m to 10,000 m. In addition, since the BT information detected by the ground-based MWR is the basis for its profile retrieval and assimilation, the quality and stability of the BT data of the various channels directly affect the retrieval and assimilation performance. The Monochromatic Radiative Transfer Model (MonoRTM) has been commonly used for BT simulation in microwave bands with high accuracy [7,14]. Its BT simulation requires information on atmospheric temperature, humidity, and liquid water content at different altitudes as input, and it can achieve higher accuracy for BT simulation in the presence of cloud cover. In our work, the temperature and humidity profiles from RAOB and the CLW from ERA5 reanalysis data were used as input data sets for the BT simulation calculations, and the same 14 channels of BT as for MWR RPG were taken as outputs for comparison. This work also classified the BT data and the temperature profiles into "Clear-Sky", "No-precipitating Cloud" and "Precipitating Cloud" categories, according to the weather conditions. Specifically, conditions with precipitation were classified as "Precipitating Cloud", and those without precipitation were classified as "Clear-Sky" or "No-precipitating Cloud" according to the relative humidity: "Clear-Sky" corresponds to the RAOB relative humidity being less than 85% in all altitude layers, and "No-precipitating Cloud" to the relative humidity of one or more layers of the RAOB data being greater than or equal to 85%.

Methods of Statistical Analysis

The characteristics of the MWR RPG observed BT were analyzed by comparing the bias and standard deviation (SD) of the MWR RPG observed BT with respect to the MonoRTM simulated BT. The coefficient of determination (R²) was used to describe the degree of fit of the observed BT to the simulated BT. Let n be the size of the sample used for comparison. The bias, SD, and R² are given by:

bias = (1/n) Σ_{i=1..n} (O_i − B_i), (1)
SD = sqrt( Σ_{i=1..n} (X_i − X̄)² / (n − 1) ), (2)
R² = 1 − Σ_{i=1..n} (y_i − ŷ_i)² / Σ_{i=1..n} (y_i − ȳ)². (3)

In the above three equations, O_i (with i being the label of the sample) is the MWR RPG observed BT, B_i represents the MonoRTM simulated BT, X_i = O_i − B_i is the observed BT minus the simulated BT (OMB), X̄ is the average value of the OMB, y_i stands for the true value of the observed BT, ȳ denotes the average value of y_i, and ŷ_i gives the MWR RPG observed BT predicted by the linear fitting equation.

The performance of the temperature profiles retrieved by MWR RPG was evaluated by comparing the bias and RMSE calculated against the radiosonde data. Let n be the size of the sample used for comparison, with the profile retrieved by MWR RPG taken as O_i and the profile detected by the radiosonde taken as B_i. The calculation of bias is shown in Equation (1), and the RMSE is given by RMSE = sqrt( (1/n) Σ_{i=1..n} (O_i − B_i)² ).
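A minimal NumPy sketch of these comparison statistics is given below. It assumes paired arrays of observed and simulated (or radiosonde) values; the sample standard deviation uses the (n − 1) denominator written above, which is an assumption about the paper's convention, and the function names are ad hoc.

```python
import numpy as np

def bias(obs, sim):
    """Mean of observed minus simulated (OMB)."""
    return np.mean(np.asarray(obs) - np.asarray(sim))

def sd_omb(obs, sim):
    """Sample standard deviation of OMB about its mean (ddof=1 assumed)."""
    return np.std(np.asarray(obs) - np.asarray(sim), ddof=1)

def r2(obs, sim):
    """Coefficient of determination of the linear fit predicting obs from sim."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    slope, intercept = np.polyfit(sim, obs, 1)
    pred = slope * sim + intercept
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(retrieved, raob):
    """Root-mean-square error between retrieved and radiosonde profiles."""
    d = np.asarray(retrieved) - np.asarray(raob)
    return np.sqrt(np.mean(d ** 2))
```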
Brightness Temperature Characteristics Analysis during Various Weather Conditions

First of all, we analyzed the all-sky BT data of the MWR RPG observations, which include precipitation epochs, followed by the observed BT of the non-precipitation epochs. In addition, when analyzing the observed BT performance for the no-precipitating cloud category, the influence of whether or not CLW information is added to the BT simulation on the evaluation of BT performance was analyzed.

Characteristics Analysis of All-Sky Brightness Temperature Data

All successfully matched BT data were compared and analyzed. There were 1749 BTs for each channel, and 335 of them were measured during precipitation conditions. The OMB of the 14 channels was plotted over time (Figure 1). The left column of Figure 1 shows the time series plots of OMB for the seven humidity channels and the right column shows the time series plots of OMB for the seven temperature channels. It can be seen that the OMB of the humidity channels was larger than that of the temperature channels overall. Moreover, the overall trend of OMB before and after each MWR calibration was the same, indicating that there was no significant change in the BT detection of each channel before and after the MWR calibrations, and that the detection performance was stable. Besides, for the all-sky BT data, the OMB was mostly around zero except at precipitation times, when the observed BT had a large positive bias. The water vapor channels are mainly used to detect the water vapor information in the atmosphere near the ground. During precipitation there was water on the waterproof cover of the equipment, resulting in a large difference between the observed BT and the simulated BT (which does not account for the water accumulation). This suggests that the high BT measured by the ground-based MWR RPG during precipitation conditions affected its observation performance.

Characteristics Analysis of Clear-Sky Brightness Temperature Data

For the performance analysis of the BT of MWR RPG observations during non-precipitation conditions, there were 368 BTs for each channel during clear-sky conditions without clouds, and the time series of their OMB are shown in Figure 2.
It can be seen that the range of OMB for each channel during clear-sky conditions was significantly reduced compared with Figure 1, with the OMB ranges of the various channels showing discrepancies within 10 K. There were obvious positive biases for all humidity channels in the left columns and for the 51.26 and 53.86 GHz temperature channels in the right columns, i.e., the observed values were large. In addition, there was some seasonal variation in the OMB of the near-surface temperature channels during clear-sky conditions. That is, a systematic bias with time dependence was seen, with larger OMB values in winter and spring and smaller OMB values in summer and autumn. This systematic deviation characteristic was obviously related to the ambient temperature. Since the microwave radiometer requires very high thermal stability of the receiver, whose temperature variation should not exceed 0.02 K, it needs a temperature controller to adjust the receiver temperature through heating control. Its working performance in winter and summer should differ to some extent, which might have led to this seasonal variation characteristic. However, the exact cause needs further confirmation.

To further evaluate the difference between the observed BT of MWR RPG and the simulated BT of MonoRTM during clear-sky conditions, the bias, SD, linear fit equation, and R² of the observed BTs and the simulated BTs at clear-sky times were calculated, and scatter plots were made for each channel (Figure 3). It can be seen that, except for the 52.28, 56.66, 57.3, and 58 GHz channels, all other channels had small positive biases, with the highest bias (1.55 K) located at the 51.26 GHz channel. The bias of each channel showed that there was a systematic bias requiring further validation. The 22.24-23.84 GHz channels had a larger SD (>2 K) compared with the other channels, where the data were more discrete with a larger difference between the observed and simulated BT. In addition, the fit coefficients of all channels were high, with the R² values above 0.99.
Especially for the 54.94-58 GHz channels, the R² reached 1, implying that the observed BTs of MWR RPG during clear-sky conditions were in good agreement with the simulated BTs of MonoRTM. It should be noted that the overall fitting coefficients varied somewhat among channels, which was related to the absorption lines of each channel. For the water vapor and oxygen absorption peak regions in the K and V bands, the corresponding detection heights were low and the fitting coefficients were high, especially for the 54.94-58 GHz channels in the V band. However, for the absorption valley area, due to the high detection height, weak energy signal, and low R², the difference and fluctuation between measured and simulated BTs were large. Overall, the observed BT performance of the temperature channels was better than that of the humidity channels in terms of the bias, SD, and R² of the observed BTs.

Characteristics Analysis of No-Precipitating Cloudy Brightness Temperature Data

For the performance analysis during non-precipitation conditions, there were 1046 BTs for each channel during no-precipitating cloudy conditions. The time series plots of OMB are shown in Figure 4. It can be seen that the range of OMB for each channel during no-precipitating cloudy conditions was significantly smaller compared with the all-sky BT (Figure 1), but most channels had a larger range of OMB compared with the clear-sky BT (Figure 2). The OMB value ranges of the various channels were slightly different. All channels, except for 56.66-58 GHz, had obvious positive biases, and most of their OMB values were concentrated within ±10 K. The OMB values of the 56.66-58 GHz channels were concentrated within ±1 K. In addition, consistent with the clear-sky case, there was some seasonal variation in the OMB of the near-surface temperature sounding channels during cloudy conditions.
In order to accurately analyze the discrepancies between the observed BT of MWR RPG and the simulated BT of MonoRTM during no-precipitating cloudy conditions, the bias, SD, linear fit equation, and R² of the observed BTs and the simulated BTs during cloudy conditions were calculated, and scatter plots were made for each channel (Figure 5). It can be seen that, except for the 56.66-58 GHz channels, the scatter distributions of the other channels were more discrete and all of them had positive biases. Furthermore, the bias and SD were larger than the values during clear-sky conditions, with the largest bias and SD occurring at the 51.26 GHz channel (5.2994 K and 7.3139 K, respectively). In addition, the R² values of all channels were above 0.9; in particular, the R² values of the 54.94-58 GHz channels reached one. Compared with the clear-sky conditions, the R² values of some channels were slightly lower but still remained at a high level. Overall, the systematic bias of the BTs observed by MWR RPG during no-precipitating cloudy conditions was larger, and the correlation with the simulated BTs was lower, than during clear-sky conditions.

Effect of Cloud Liquid Water on Brightness Temperature Simulation

The radiative transfer model includes a parameterization of CLW absorption that can be selectively used for non-scattering microwave simulations. The implementation of calculating the optical depths based on the input CLW mass mixing ratio follows Turner, Kneifel, and Cadeddu [24]. For the no-precipitating cloudy conditions, since there are large differences between the observed BT of the MWR RPG and the simulated BT of MonoRTM, our work compared these two types of BT with the addition of CLW. Figure 6 shows the time series of the differences between the observed and simulated BTs (with CLW). Except for the 54.94-58 GHz channels, Figure 6 shows that the positive biases of the OMBs decreased, and the OMBs with CLW were closer to 0 than the OMBs without CLW. This shows that adding liquid water information can significantly reduce the biases caused by cloud influence. In addition, the addition of CLW information made the seasonal variation of the OMB of the near-surface temperature sounding channels more obvious.
In the presence of clouds, to accurately assess the differences between the observed BT of MWR RPG and the simulated BT with the addition of CLW information, the bias, SD, linear fit equation, and R² between the observed and the simulated BTs were calculated, and scatter plots were made for each channel (Figure 7). Comparing Figure 7 with Figure 5, both the bias and SD of the 25.44-52.28 GHz channels were reduced after adding CLW information, the systematic bias was reduced, and the dispersion was weakened as well. The largest bias and SD, at the 51.26 GHz channel, were 3.2023 K and 5.8595 K, respectively. For both the 31.4 GHz and 51.26 GHz channels, the biases between the observed BT and the simulated BT were reduced by about 2 K and their SDs were reduced by more than 1 K after adding CLW to the simulated BT. Moreover, the R² of each channel was above 0.95, indicating better matches between the observed and the simulated BTs with the addition of CLW information. Overall, the addition of CLW information reduced the bias between the observed and the simulated BTs of MWR RPG during no-precipitating cloudy conditions, and the correlation between the two was higher. It should be noted that outlier data points were removed using the Pauta criterion (3σ rule) in the scatter plots for no-precipitating cloudy conditions. It was found that the near-surface relative humidity of the sounding data at the removed times was mostly above 90%, where fog might have occurred. This feature might provide some reference for subsequent quality control before the application of MWR RPG products.
Accuracy of Temperature Retrieval Profiles and Performance of Temperature Inversion Retrieval

The performance of the MWR RPG temperature profile retrieval products during various weather conditions was studied using sounding data. The pre-processed sample data were divided into three categories: clear-sky, no-precipitating cloud, and precipitating cloud, with 302, 1038 and 189 samples, respectively. The statistical analysis and the temperature inversion retrieval case analysis were performed for these weather conditions.

Accuracy of Temperature Retrieval Profiles

Figure 8 shows the bias and RMSE of the temperature profiles with height for the clear-sky, no-precipitating cloud, and precipitating cloud categories. As can be seen from Figure 8, the trends of bias and RMSE of the temperature profiles were consistent for the three weather conditions. Among them, the atmospheric temperature profiles retrieved by MWR RPG during precipitation had obvious positive biases at all heights, and the RMSE values were all within 4 K. The biases of the other two categories of atmospheric temperature profiles were around zero below 2 km height, and the RMSEs were below 2 K for heights below 4 km. In addition, the overall RMSE of the temperature profiles for all three weather conditions increased with height, so the retrieval accuracy of the MWR decreased with height. From the statistical analysis, the MWR had almost the same temperature profile retrieval accuracy during clear-sky and no-precipitating cloudy conditions, but the retrieval accuracy decreased significantly during precipitation. Therefore, the temperature profiles retrieved by MWR RPG, except during precipitation conditions, could be applied to an assimilation system with high reliability.
Performance of the Temperature Inversion Retrieval In the retrieval of temperature profiles, the temperature inversion characteristic is not easy to retrieve.The study selected temperature profiles with temperature inversions during clear-sky and no-precipitating cloudy weather conditions to explore the performance of MWR RPG temperature retrievals.Figure 9 shows the comparison of the atmospheric temperature profiles of MWR RPG and RAOB for three cases of clear-sky conditions (Figure 9a-c) and no-precipitating cloudy conditions (Figure 9d-f).In Figure 9, the correlation coefficients (r) between the temperature profiles from the RPG retrieval and RAOB data were above 0.9, which indicated good agreements overall between these two profiles for these studied time periods.MWR RPG retrieval of the temperature inversions below 1 km (Figure 9a,e) had smoothed-out amplitude and sharpness.When the atmospheric layer where the temperature inversion occurred was shallow (Figure 9b), or there was a double-layer temperature inversion (Figure 9d), the temperature inversions could not be well retrieved by MWR RPG.MWR RPG had limited retrieval ability for temperature inversion characteristics above 1 km, which even increased the error of retrieval of temperatures in the middle and upper troposphere (Figure 9c,f).Overall, MWR RPG had the potential ability to retrieve the temperature inversions in the boundary layer, which has important application value in fog and air pollution monitoring. 
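A temperature inversion is simply a layer in which temperature increases with height; one straightforward way to flag such layers in a retrieved or radiosonde profile is sketched below (a practical implementation would normally also apply minimum-depth and minimum-strength thresholds, which are not specified in the paper).

```python
import numpy as np

def inversion_layers(height_km, temp_k):
    """Return (bottom, top) heights of layers where temperature increases with height."""
    rising = np.diff(temp_k) > 0.0        # True between adjacent levels where T increases upward
    layers, start = [], None
    for i, up in enumerate(rising):
        if up and start is None:
            start = i                     # inversion base
        elif not up and start is not None:
            layers.append((height_km[start], height_km[i]))   # inversion top at the local maximum
            start = None
    if start is not None:                 # inversion reaching the top of the profile
        layers.append((height_km[start], height_km[-1]))
    return layers
```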
Conclusions This study compared the 2017-2019 observations of RPG-HATPRO-G5 (the latest generation of ground-based multi-channel MWR installed in Shanghai, China) with the RAOB.The goal of this study was twofold.First of all, we compared the simulated BT of RAOB and the observed BT of MWR RPG during all-sky conditions, including precipitation times, followed by comparing them during non-precipitation conditions.In addition, when analyzing the observed BT performance during no-precipitating cloudy conditions, the impact of adding CLW information to BT simulation on evaluating BT performance was analyzed.Secondly, the accuracy of MWR RPG temperature profiles during various weather conditions was studied using RAOB data.For this purpose, this study carried out statistical analysis and temperature inversion retrieval cases analysis. There was no significant change in the BT of each channel before and after the MWR calibration, so MWR RPG can operate stably.The detection performance of the near-surface temperature detection channels (56.66-58 GHz) was excellent compared with other channels, but the long-term time series analysis of these channels could reveal the seasonal variation bias characteristics.In addition, MWR RPG observed BTs were significantly higher during precipitation conditions.The simulated BTs and the observed BTs of MWR RPG matched well when clear-sky conditions were taken into account, but they did not match well in conditions of no-precipitating cloud.With the CLW added into the simulated BTs, the discrepancies were reduced, as compared to RPG observed BTs, making both observed and simulated BTs more consistent.Specifically, by adding CLW information to the simulated BT, the bias and the SD of the observed BT and the simulated BT reduced and the R 2 value improved. Regarding the retrieved temperature profiles from MWR RPG observed BT data, the statistical analysis showed that the performances for the clear-sky and no-precipitation conditions were better, as compared with RAOB, while the performance was worse for the case of precipitating cloudy conditions.The temperature profiles of the MWR RPG retrieval had the same accuracy of RMSEs with heights during both clear-sky and cloudy-sky conditions, where the RMSEs were below 2 K when the heights were below 4 km.The r values between the temperature profiles from the MWR RPG retrievals and the RAOB profiles were above 0.9.Moreover, the MWR RPG can retrieve temperature inversions that are below 1 km, single-layer and not shallow. The ground-based MWR can provide BT observation data with high temporal resolution throughout the day as well as atmospheric profile retrieval products.Therefore, it plays an important role in monitoring the occurrence and development of important weather processes.This study can lay a certain foundation for the retrieval and the assimilation applications of ground-based MWR observations. 
Figure 1. Time series diagrams of OMB of 14 channels, where the pink dots are precipitation times, the blue dots stand for non-precipitation times, and the red dashed lines represent the dates of MWR calibration (7 November 2017, 28 August 2018, 17 April 2019, 25 July 2019, and 6 December 2019).
Figure 2. Time series diagrams of OMB of 14 channels in the clear sky (the red dashed lines are the dates of MWR calibration).
Figure 4. Time series diagrams of OMB of 14 channels in the no-precipitating cloudy weather (the red dashed lines are the dates of MWR calibration).
Figure 6. Time series diagrams of OMB (with CLW) of 14 channels in the no-precipitating cloudy weather (the red dashed lines are the dates of MWR calibration).
Figure 8. Variation diagrams of temperature profile Bias (a) and RMSE (b) with height for conditions of clear-sky (black), no-precipitating cloud (blue) and precipitating cloud (red).
Figure 9. Temperature profiles in various weather conditions, where the blue line and the red line represent the MWR retrieval profiles (RPG) and the radiosonde observation profiles (RAOB), respectively. Temperature profiles observed at 1200 UTC on 17 April 2019 (a), 0000 UTC on 15 June 2019 (b), 0000 UTC on 27 December 2019 (c), 1200 UTC on 24 April 2019 (d), 0000 UTC on 16 June 2019 (e), and 1200 UTC on 18 September 2019 (f).
9,364.6
2022-09-23T00:00:00.000
[ "Environmental Science", "Physics" ]
Upper versus lower airway microbiome and metagenome in children with cystic fibrosis and their correlation with lung inflammation Objective Airways of children with cystic fibrosis (CF) harbor complex polymicrobial communities which correlate with pulmonary disease progression and use of antibiotics. Throat swabs are widely used in young CF children as a surrogate to detect potentially pathogenic microorganisms in lower airways. However, the relationship between upper and lower airway microbial communities remains poorly understood. This study aims to determine (1) to what extent the oropharyngeal microbiome resembles the lung microbiome in CF children and (2) if lung microbiome composition correlates with airway inflammation. Method Throat swabs and bronchoalveolar lavage (BAL) were obtained concurrently from 21 CF children and 26 disease controls. Oropharyngeal and lung microbiota were analyzed using 16S rRNA deep sequencing and correlated with neutrophil counts in BAL and antibiotic exposure. Results Oropharyngeal microbial communities clustered separately from lung communities and had higher microbial diversity (p < 0.001). CF microbiome differed significantly from non-CF controls, with a higher abundance of Proteobacteria in both upper and lower CF airways. Neutrophil count in the BAL correlated negatively with the diversity but not richness of the lung microbiome. In CF children, microbial genes involved in bacterial motility proteins, two-component system, flagella assembly, and secretion system were enriched in both oropharyngeal and lung microbiome, whereas genes associated with synthesis and metabolism of nucleic acids and protein dominated the non-CF controls. Conclusions This study identified a unique microbial profile with altered microbial diversity and metabolic functions in CF airways which is significantly affected by airway inflammation. These results highlight the limitations of using throat swabs as a surrogate to study lower airway microbiome and metagenome in CF children. Introduction Throat swabs are widely used in young CF children as a surrogate to detect potentially pathogenic microorganisms that may be present in lower airways. However, studies comparing throat swabs with lower airway cultures using culture-based methods have shown variable sensitivity and specificity when compared to bronchoalveolar lavage (BAL) culture, the gold standard [31][32][33][34]. Thus, the relationship between upper and lower airway microbial communities remains poorly understood. 
It is not known to what extent oropharyngeal microbial communities sampled using throat swabs resemble the microbial populations in the lungs of CF children. The objective of this study was to compare oropharyngeal and lung microbiome sampled concurrently using throat swabs and BAL, and examine the relationship between lung microbiome and severity of airway inflammation and antibiotic exposure. Using 16S rRNA sequencing and PICRUSt [35] (a bioinformatics tool for predicting metagenome functional content from marker gene surveys), we showed that the oropharyngeal microbiome of CF children does not reflect the microbial composition and community structure of their lung microbiota, and that the CF airway microbiome and its predictive functions differ significantly from those of non-CF controls. Study population Children with or without CF who underwent clinically indicated bronchoscopy and BAL were recruited for the study between June 2012 and November 2013. The clinical indication for bronchoscopy in CF patients was to evaluate airway inflammation and mucus plugging and to obtain bronchoalveolar lavage culture to guide antibiotic therapy. The indication for bronchoscopy in non-CF patients was chronic cough and/or wheezing of unknown etiology. At the time of bronchoscopy, some of the CF subjects were hospitalized and were receiving IV antibiotics, and others were on oral antibiotics or had been recently treated with antibiotics. Some subjects had no recent exposure to antibiotics. Informed consent was obtained from all patients or their parents (depending on the patient's age) for study participation and procedures. Electronic medical records were reviewed to evaluate disease severity. The study was approved by the Institutional Review Board at the University of Florida. Bronchoscopy and specimen collection Although older children could expectorate to produce sputum, throat swabs were used in all subjects for consistency because they could be applied to the entire age range of subjects. Throat swabs were obtained immediately prior to bronchoscopy from all patients by passing two cotton swabs across the posterior pharynx simultaneously. One sample swab was sent for routine bacterial culture and susceptibility testing, and the other was frozen for subsequent microbiome analysis. Flexible bronchoscopy was performed in the majority of patients in a bronchoscopy suite under conscious sedation. After a patient was adequately sedated, a bronchoscope was passed through the right or left nasopharynx to the oropharynx to examine the glottic structures. Topical 1% lidocaine was then instilled on the vocal cords before the scope was passed through the vocal cords to examine the entire tracheobronchial tree. BAL was obtained from the pulmonary lobe of interest based on radiological and/or bronchoscopic findings. BAL was performed by injecting three separate aliquots of normal saline (1 ml/kg) after wedging in a lobar or segmental bronchus and suctioning each aliquot into a trap cup. To minimize contamination from the upper airway, suctioning of upper airways was avoided to prevent entry of upper airway microorganisms into the suctioning channel before entering the lower airways. In addition, tight wedging of the scope in the lower airways was performed before instilling saline to avoid contaminating the surface of the scope with microorganisms that may have been picked up from the upper airways. For a few subjects, the bronchoscopy procedure was done in the operating room under general anesthesia due to patient-related risks. 
In these instances, a bronchoscope was passed through an endotracheal tube or a laryngeal mask, and BAL was obtained as described above. Sample analysis DNA was extracted from throat swab or BAL samples using the MoBio PowerSoil kit according to manufacturers' instructions. Water and DNA extraction kits were used as negative controls for background 16S rRNA contamination. Bronchoalveolar lavage fluid (BAL) was centrifuged at 300 g for 10 minutes to remove human cells or cell debris prior to DNA extraction. To extract DNA from swab samples, swabs were immersed in 1x PBS, and then underwent further processing. Due to low DNA concentrations, a 2-step nested PCR was performed. Purified DNA was amplified using 16S rRNA primers 8F and 1452R, followed by amplification of the V1-V3 hypervariable region of 16S rRNA gene using barcoded primers. Amplification of the expected size was confirmed by agarose gel electrophoresis, and bands were excised and purified by gel extraction (Qiagen, Valencia, CA). DNA concentration was measured using Qubit (Invitrogen, Carlsbad, CA), and samples were pooled at an equimolar concentration and deep sequenced using the Illumina MiSeq platform (Illumina, San Diego, CA). Bioinformatic analysis Raw paired-end reads generated from Illumina MiSeq runs were processed using custom scripts written in R [36] for de-multiplexing, quality filtering, and trimming reads. Reads were filtered based on exact matches to the barcode and primer sequences with an average quality score of 30 or higher. Samples were de-multiplexed according to the combination of unique variable length barcodes (4 to 8 nt) on each paired end. In downstream analysis, barcodes and primers were trimmed. To reconstruct the original contiguous amplicon, paired end reads were joined using FLASh [37], with a minimum of 10 bp overlap. USEARCH alignment was employed with a minimum of 97% identity and 50% aligned query threshold to assign reference OTUs with taxonomic information from the Silva database to each joined read [38,39]. Reads that did not meet the filtering criteria were excluded from subsequent analysis. Rarefaction for diversity analyses was performed at an even sub-sampling depth of 15,000 sequences at 10 iterations per sample using scripts from QIIME (version 1.8.0). OTUs with 1-2 reads and six samples that did not meet the minimum required depth were removed. Sub-tables were created based on taxonomic levels and reads that aligned to negative controls were removed from subsequent analysis. The OTU table was normalized using the Wisconsin command in the vegan package. Additional statistical analyses were performed using LEfSe [40] and in R to determine differentially significant features between groups. Subsequent automated analyses, including alpha diversity measures, were calculated using formulas and were generated in R. Beta diversity analysis was carried out in QIIME (version 1.8.0) using Unifrac distance metric [41,42]. Statistical significance of clustering by group was determined with Permutational Multivariate Analysis of Variance (PERMANOVA). Prediction of metagenome functional content was performed based on 16S rRNA sequences using PICRUSt (phylogenetic investigation of communities by reconstruction of unobserved states) [35]. Subjects and samples analyzed To compare oropharyngeal and lung microbiome, we sampled the oropharynx and the lungs of 21 CF and 26 non-CF children (disease control) using throat swabs and BAL, respectively. 
Clinical characteristics of all subjects are shown in Table 1. The median age of CF children was higher compared to non-CF controls (8 vs 2.5 years of age; p = 0.007; Table 1). As expected, 14 of 20 (70%) BAL samples from CF children had high levels of neutrophilic inflammation, compared to 7 of 25 (28%) in non-CF controls (p = 0.007). Pseudomonas aeruginosa was cultured in 4 of 18 (22%) throat swabs and 2 of 21 (9.5%) BAL samples from CF children, compared to none from throat swabs or BAL cultures in non-CF controls. Similarly, S. aureus was isolated from 5 of 18 (28%) throat swabs and 8 of 21 BAL samples (38%) from CF children, compared to 2 of 24 (8%) throat swabs and 3 of 26 (12%) BAL from non-CF controls. Eight of 21 CF children (38%) received concurrent antibiotics at the time of sampling and 13 of 20 (65%) had antibiotic exposure within 1 month prior to sampling, compared to 3 of 25 (12%) and 6 of 26 (23%), respectively, in non-CF controls (p = 0.08 and 0.007, respectively). No CF subjects received CFTR modulator therapy. Amplification of the 16S rRNA gene segment was successful for 19 throat swabs and 18 BAL samples from CF patients and 23 throat swabs and 25 BAL samples from non-CF disease controls. The failure rate for amplification was comparable to that of a prior study [43]. Illumina sequencing of these 37 CF and 48 non-CF specimens yielded a total of 4,022,222 reads (median: 31,680; range: 3,260-323,494) spanning the V1-V3 hypervariable region of the bacterial 16S rRNA gene. Microbial diversity and community structure of oropharyngeal and lung microbiome The microbial diversity (Fig 1A) was significantly higher in the oropharyngeal microbiome (Throat) compared to the lung microbiome (BAL) in both CF and non-CF children. Species richness (Fig 1B) was significantly higher in the oropharyngeal microbiome compared to the lung microbiome in non-CF controls but not CF children. Compared to non-CF controls, the airway microbiome of CF children (both Throat and BAL) had lower microbial diversity (Fig 1A). However, these differences were not statistically significant. Non-metric multi-dimensional scaling ordination analysis (Fig 2A) showed significant separation between oropharyngeal (Throat) and lung microbial communities (BAL) in both CF and non-CF controls (p = 0.001 and p = 0.001, respectively, PERMANOVA), indicating that the oropharyngeal microbiome does not resemble the lung microbiome. Moreover, both oropharyngeal and lung microbiome differed significantly between CF and non-CF controls (Fig 2B; Throat: p = 0.004; BAL: p = 0.013; PERMANOVA), suggesting a unique airway microbiome in cystic fibrosis. Microbial composition of CF and non-CF airway microbiome We next compared the composition and distribution of bacterial taxa in the airway microbiome. The oropharyngeal communities of both CF and non-CF controls were dominated by Proteobacteria and Firmicutes, and to a lesser extent Bacteroidetes (Fig 3A). In the lung, Proteobacteria and Firmicutes also accounted for the vast majority of the lung microbial population, but Bacteroidetes were a minority population (Fig 3B). Although the aggregate microbiome between upper and lower airways appeared similar (Fig 3), the concordance between paired upper and lower airway microbiome within subjects (r>0.3; Spearman correlation) was low, which was observed in only 4/20 (20%) CF and 7/24 (29%) non-CF children (Fig 4). Compared to non-CF controls, a higher abundance of Proteobacteria was observed in both upper and lower airway microbiota of CF children. 
While members of Streptococcaceae were prevalent in the oropharyngeal microbiome of all subjects, Pseudomonadaceae were more abundant in CF (Fig 5). Pseudomonadaceae were readily detectable in both upper and lower airways of CF (17% and 13%, respectively). Interestingly, Moraxellaceae dominated the lung microbiome of non-CF controls, which accounted for approximately 21% of the microbial community. In contrast, Moraxellaceae constituted only 1% of the CF lung microbiome. Functional profiling of airway microbial communities Compared to non-CF controls, genes associated with bacterial motility proteins, two-component system, and flagella assembly were significantly more abundant in the CF BAL metagenome (Fig 6A), and genes associated with two-component system, valine, leucine and isoleucine degradation, and secretion system were differentially enriched in the CF oropharynx metagenome (Fig 6B). For both CF and non-CF controls, upper airway metagenomes were enriched with genes involved in amino sugar and nucleotide sugar metabolism, fructose and mannose metabolism, galactose metabolism, starch and sucrose metabolism, and phosphotransferase system, whereas genes associated with amino acid biosynthesis, degradation and metabolism, and fatty acid biosynthesis were overrepresented in lower airway metagenomes (S1 Fig). Lung microbiome and airway inflammation As neutrophils contribute to the production of reactive oxygen species during inflammation, which could modulate microbial communities on mucosal surfaces such as the lungs, we used neutrophil count in the BAL as a surrogate for lung inflammation. As expected, neutrophil count in the BAL of CF children was significantly higher compared to non-CF controls (Fig 7A). In non-CF controls, 6/7 (86%) with high neutrophil counts (>50 neutrophils/µL) had positive viral PCR, compared to 5/19 (26%) with low neutrophil counts (p<0.05). We observed a negative correlation between neutrophil counts in the BAL and microbial diversity, but not species richness (Fig 7B). However, no significant difference in BAL microbial diversity and richness was observed between children with and without prior antibiotic exposure (S2 Fig). Discussion In this study, we examined the differences between upper and lower airway microbiome communities in cystic fibrosis children and non-CF disease controls by sampling the oropharynx and lungs concurrently using throat swabs and bronchoalveolar lavage. We also examined the correlation between airway inflammation and airway microbiome in CF patients. We found significant differences between upper and lower airway microbiome in both CF and non-CF patients in terms of microbial richness and diversity as well as the composition and structure of microbial communities (i.e., bacterial taxa). In both CF and non-CF subjects, lower airway microbiome displayed lower diversity and richness with a different bacterial composition compared to their upper airways. There is a dearth of studies comparing upper airway microbiota with lower airway microbiota in children using bronchoalveolar lavage for sampling of lower airways. Kloepfer et al. showed increased richness and diversity of BAL fluid microbiome compared to nasopharyngeal samples in children undergoing clinically indicated bronchoscopy [44]. In contrast, using expectorated and induced sputum as the sampling method for lower airways, Zemanick et al. showed differences between upper and lower airway microbiome similar to our study [29]. 
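For reference, the alpha-diversity measures compared in Fig 1 (Shannon diversity and observed richness) correspond to standard definitions applied per sample to the rarefied OTU table; the study itself used QIIME and R, so the NumPy sketch below is only an illustration of the quantities, not the original pipeline.

```python
import numpy as np

def shannon_diversity(counts):
    """Shannon index H' for one sample (vector of OTU counts, e.g. rarefied to 15,000 reads)."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()   # relative abundances of the observed OTUs
    return float(-(p * np.log(p)).sum())

def observed_richness(counts):
    """Number of OTUs with at least one read in the sample."""
    return int((np.asarray(counts) > 0).sum())
```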
Differences between upper and lower airways in both CF and non-CF children could be attributed to the filtering effect of upper airway structures that decreases access of bacteria to lower airways and the effect of airway clearance mechanisms (i.e., cough and ciliary motility), which clear bacteria from the lower airways. In addition, higher innate immune effectors and antimicrobial proteins in the lower airways could also contribute to these differences. Regardless of the potential mechanisms, our data clearly demonstrate that lower airways constitute not only a more sterile environment but also a separate microbial environment from upper airways. Therefore, sampling of the upper airway in CF patients (especially children) for the purpose of understanding the lower airway microbiome is not supported by the present study. Numerous studies have used culture-based methods to compare upper airway to lower airway microbes and have focused mostly on assessing the accuracy (sensitivity and specificity) of throat swab or sputum cultures in detecting specific microorganisms in lower airways that are thought to be disease causing [31][32][33]. Most of these studies indicated a high level of accuracy and therefore provide a misleading impression of similarities between upper and lower airway microbial environments, which is not supported by our results using culture-independent techniques capable of capturing the entire microbiome spectrum. Our results showed that both upper and lower airway microbiome clustered differently in CF subjects compared to that of non-CF controls, suggesting the possibility that CF airway disease is associated with a unique microbiome signature regardless of the observed differences between upper and lower airway microbiome. These differences between CF and non-CF patients do not appear to be directly related to antibiotic therapy, but further studies are needed to confirm these results. Large longitudinal studies are also necessary in order to determine if airway microbiome not only differentiates between CF and non-CF subjects but also identifies different disease phenotypes and/or disease severity within the CF population. Our study was not powered to determine the correlation between disease severity as measured by lung function and airway microbiome composition. Previous studies, however, have suggested that progressive CF lung disease is associated with decreased richness and diversity of airway microbiome. Such decreased richness and diversity was most likely attributed to antibiotic treatment. To circumvent this major confounder, future studies would require analysis of CF patients at a younger age. The advent of newly approved CF disease-modifying treatments using CFTR correctors and potentiators would also provide an opportunity to study the direct effect of CFTR dysfunction on airway microbiome structure and composition. Our 16S rRNA analysis demonstrates significant inter-individual variations in taxonomic profiles within both CF and non-CF microbiota (Fig 4). However, predicted functions encoded by the airway metagenome were more conserved within each airway habitat and within groups. For example, in both CF and non-CF controls, genes involved in amino sugar and nucleotide sugar metabolism, fructose and mannose metabolism, galactose metabolism, starch and sucrose metabolism, and phosphotransferase system dominated the oropharyngeal metagenome compared to the lung metagenome (S1 Fig). 
The abundance of these genes likely reflects the energy requirement of the resident oropharyngeal bacteria in metabolizing carbohydrates. On the other hand, gene categories related to amino acid and fatty acid biosynthesis and metabolism were more abundant in the lung metagenome. The functional differences between upper and lower airway microbiota are consistent with our observation that upper and lower airway microbiome are distinct. Compared to non-CF controls, functions related to bacterial motility proteins, two-component system, flagella assembly, and secretion system were differentially enriched in both the oropharynx and lung metagenome of CF children (Fig 6), a feature consistent with the pathogenic potentials of typical organisms colonizing the CF respiratory tract. In contrast, genes involved in the synthesis and metabolism of nucleic acids and protein dominated the airway metagenome of non-CF controls (Fig 6), which likely reflects the basic requirements of resident microbes in the respiratory tract. Going forward, elucidating the role of key pathogenic bacteria encoding virulence-related functions and understanding the basis of individual variations in microbiomes and metagenomes in CF will be essential in future studies. Using BAL neutrophil count as a surrogate marker of airway disease severity, we observed a negative correlation between neutrophil count and lower airway microbiome diversity in CF patients, suggesting that CF disease severity is associated with significant microbiome changes regardless of antibiotic treatment. Our data are consistent with recently published studies showing a similar correlation [45][46][47]. We can only speculate that high neutrophil counts in lower airways disrupt microbial diversity and composition through their microbicidal effects, including phagocytosis, neutrophil proteases and production of oxygen free radicals. Since neutrophils are the final effector cells in the cascade of the airway immune response, and since the innate immune function of CF airways is defective due to changes in mucus properties and pH of airway surface fluid, CFTR dysfunction leads to overstimulation of the adaptive immune response and high neutrophil abundance. It is plausible that overabundance of neutrophils in severe CF disease eventually leads to a less diverse microbiome, but at the high price of an inflammatory response that leads to airway damage. Future longitudinal studies are needed to determine if certain early changes in the pattern of lower airway microbiome can be predictive of rapidly progressive lung disease. There are several limitations in this study. First, there was a significant difference in age range between our CF and non-CF cohorts. Because the microbiome is known to vary with age from infancy to school age, this could play a role in the observed differences in diversity between groups. Second, our CF cohort was not homogeneous, as BAL samples were obtained in some patients during antibiotic therapy. The number of CF children without prior antibiotic therapy in our cohort was too small to allow direct comparison with controls. However, we observed no significant difference in lung microbial diversity and richness between children with and without antibiotic exposure (S2 Fig). On the other hand, antibiotic exposure should have little impact on the comparison of upper versus lower airway microbiome within individuals. Finally, we used an in-silico approach (PICRUSt) to compare functions of CF and non-CF airway metagenomes. 
However, biological functions could not be determined using this approach, and additional metagenomics or metatranscriptomics analyses are necessary to validate our findings. In conclusion, the differences between upper and lower airway microbiome and their predicted gene functions highlight the limitations of using upper airway samples as a surrogate to study the lower airway microbiome in CF patients. Our results suggest that lower airway microbiome diversity may serve as a marker of airway disease severity in CF. Going forward, airway microbiome may be valuable for identifying different CF disease phenotypes and/or predicting the effect of new CFTR modulators on disease course.
5,153.4
2019-09-19T00:00:00.000
[ "Medicine", "Biology" ]
Bio-Innovative Modification of Poly(Ethylene Terephthalate) Fabric Using Enzymes and Chitosan This article investigates the activation of surface groups of poly(ethylene terephthalate) (PET) fibers in woven fabric by hydrolysis and their functionalization with chitosan. Two types of hydrolysis were performed—alkaline and enzymatic. The alkaline hydrolysis was performed in a more sustainable process at reduced temperature and time (80 °C, 10 min) with the addition of the cationic surfactant hexadecyltrimethylammonium chloride as an accelerator. The enzymatic hydrolysis was performed using Amano Lipase A from Aspergillus niger (2 g/L enzyme, 60 °C, 60 min, pH 9). The surface of the PET fabric was functionalized with the homogenized gel of biopolymer chitosan using a pad–dry–cure process. The durability of functionalization was tested after the first and tenth washing cycle of a modified industrial washing process according to ISO 15797:2017, in which the temperature was lowered from 75 °C to 50 °C, and ε-(phthalimido) peroxyhexanoic acid (PAP) was used as an environmentally friendly agent for chemical bleaching and disinfection. The influence of the above treatments was analyzed by weight loss, tensile properties, horizontal wicking, the FTIR-ATR technique, zeta potential measurement and SEM micrographs. The results indicate better hydrophilicity and effectiveness of both types of hydrolysis, but enzymatic hydrolysis is more environmentally friendly and favorable. In addition, alkaline hydrolysis led to a 20% reduction in tensile properties, while the action of the enzyme resulted in a change of only 2%. The presence of chitosan on polyester fibers after repeated washing was confirmed on both fabrics by zeta potential and SEM micrographs. However, functionalization with chitosan on the enzymatically bioactivated surface showed better durability after 10 washing cycles than the alkaline-hydrolyzed one. The antibacterial activity of such a bio-innovative modified PET fabric is kept after the first and tenth washing cycles. In addition, applied processes can be easily introduced to any textile factory. 
Hydrolysis causes a cleavage of the ester bonds and leads to an increased number of hydroxyl and carboxyl end groups, so that hydrolyzed fabrics have better adsorption and dyeing properties.Alkaline hydrolysis of the surface of polyester fibers is one of the most commonly used methods in the industry due to its low cost.The process is usually carried out with 4-20% KOH and NaOH at temperatures above 100 °C (130-140 °C) for more than 1 h.It results in a silk-like handle, but also in surface peeling and pitting, which has a negative effect on the strength of the fabric, so the process should be well monitored.In addition, a lot of energy and water are consumed.Some cationic surfactants and polymers (but not all) accelerate the process of alkaline hydrolysis.Originally, the commercially available surfactant Lyogen BPN (fatty acid amine amide from Sandoz) gave the best results [3], but nowadays, quaternary ammonium compounds with at least 16 C atoms, such as cetylpyridinium chloride (CPC), cetyltrimethylammonium bromide (CTAB), hexadecyltrimethylammonium chloride (HDTMAC) and benzalkonium chloride, are used as accelerators [5][6][7][8].It should be noted that in 1990, Kallay and Grancarić developed a theoretical model for the kinetics of PET decomposition as a function of temperature and the addition of accelerators [3].The optimum weight loss is between 10 and 24% with a decrease in breaking force of up to 35%, which is achieved by monitoring alkali concentration, time and temperature.Under harsher processing conditions, the pits become cracks and holes, indicating complete damage to the fabric, which then no longer has any useful properties.On the other hand, the monomers required for recycling can be obtained by complete hydrolysis of polyester [1][2][3][4][5][6][7][8][19]. As the process of alkaline hydrolysis is neither ecologically nor energetically friendly, enzymes have been researched in recent years as an alternative to chemical treatment with alkalis.Enzymes are natural and completely biodegradable protein structures and enable the selective processing of polymeric materials at low temperatures.They are biological catalysts that catalyze the polyester hydrolysis reaction under mild conditions and only functionalize the surface.In contrast to alkaline hydrolysis, the enzymes act on the fiber surface due to their size without forming pits or damaging the fiber.Various enzymes are used for the enzymatic hydrolysis of PET: lipase, esterase, cutinase, protease, papain, laccase and glycosidase, with the best effects being achieved with lipase and cutinase.However, their use has been researched mainly for films and foils and only to a limited extent for textiles [9][10][11][12][13][14][15][16][17].In addition, textiles are heterogeneous and porous, so that wetting and wicking are very different from homogeneous films and foils.Under optimized conditions, lipase enzymes from various sources can improve the hydrophilicity and dyeing properties of processed PET fibers while exhibiting good mechanical properties.Among the lipases, Amano Lipase, which can hydrolyze the polyester surface even in a slightly acidic medium, should be highlighted.The effects of enzymes on textiles are very mild and not comparable to the effects of alkaline hydrolysis, so further research is needed [9][10][11][12][13][14][15][16]. 
Chitin is the second most abundant polysaccharide in the natural environment after cellulose.Chitin is a structural polymer of the shells of marine invertebrates, the cell walls of fungi and the skeletons of insects and is insoluble in water and most organic solvents.However, enzymatic or chemical deacetylation of chitin produces chitosan, its best-known derivative.Chitosan is a natural biopolymer consisting mainly of 2-amino-2-deoxy-D-glucopyranose units linked by β-1,4 bonds.It is not soluble in water but is soluble in aqueous acidic solutions.In an acidic medium (Figure 1), the protonation of chitosan amino groups occurs.When alkali is added, it retains the properties of a solution up to a pH value of 6.3, after which it becomes a gel.Studies on the dissolution properties of chitosan and its activation have shown that the dissolution rate varies depending on the acid used.Acetic acid, citric acid, formic acid, 1,2,3,4-butanetetracarboxylic acid, 2,3-dihydroxybutanedioic acid, maleic acid, hydrochloric acid, hyaluronic acid and lactic acid have been used for this purpose [8,[20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37].The use of chitosan in textile finishing and fiber spinning is increasing as public awareness and EU policy have highlighted the need for new biological agents to replace existing antimicrobial agents, e.g., quaternary ammonium compounds (QACs), triclosan and others, in the development of hospital materials.Chitosan is characterized by biocompatibility, biodegradability, non-toxicity, antimicrobial, hemostatic and moisturizing properties.Due to its good mechanical and thermoplastic properties, it can be used in medicine as a material for prostheses, blood vessels and artificial organs as well as in systems for the controlled release of drugs and for gene delivery [21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38][39][40].In medicine and healthcare, PET fibers and materials are used in areas where mechanical properties, bio-inertness and durability are critical, such as surgical gowns, membranes, vascular prostheses, stents, reconstructive support meshes, ligaments, tendons and implants.Recently, blends with chitosan at the polymer level have been developed, or PET has been coated with chitosan for medical applications [28][29][30][31][32][33][34][35][36][37][38][39]. 
The textile application of chitosan depends on the origin of the chitin, the degree of deacetylation, the molecular weight distribution, the concentration and type of acid used, the pH value, the ionic character and the temperature, which all have an important influence on the efficiency of processing textile materials with chitosan [25][26][27].The antimicrobial efficacy of chitosan is based on the reaction of the NH 3 + groups present with the negatively charged cell membrane of the microbes, and the efficacy of chitosan is greater at low pH [20,[38][39][40][41][42].In addition, the particle size influences the dissolution rate-the smaller the particles, the faster they dissolve in the acid [38][39][40][41][42]. For all these reasons, this article investigated the alkaline hydrolysis of surface groups of PET fibers in woven fabric at a reduced temperature with the addition of an accelerator (HDTMAC) and hydrolysis using the enzyme lipase for surface activation.In contrast to the homogeneous surfaces of PET films and foils, heterogeneous PET fabrics exhibit a dual porosity-an intra- and an inter-yarn porosity-which leads to different interfacial phenomena than with foils or films made of the same polymer.Since all finishing treatments have to be environmentally friendly while fulfilling the main properties of the textiles and have to be durable for at least 3 to 50 washing cycles depending on the use of the textile material, the functionalization with homogenized chitosan gel prepared from submicron particles in acetic acid and its durability in modified industrial washing processes were investigated. Materials A commercial polyester fabric (Belira, Banja Luka, Bosnia and Herzegovina, donation), made of 100% poly(ethylene terephthalate) (PET) in a plain weave with a surface mass of 60 g/m 2 , stabilized with hot air, consisting of a textured multifilament yarn of 50 dtex, 16 f, was used for the research. Oyster mushroom (Pleurotus ostreatus) chitosan (Chibio BioTECH, Qingdao City, China), donated by Tricomed SA (Poland), was used in this research.Chitosan has a degree of deacetylation of 90%, a viscosity of 1000 cPs (mPa·s) and a molecular weight of 150 kDa.The sub-micron particles of chitosan (0.5-1 µm) were produced by milling in a Planetary Micro Mill PULVERISETTE 7 premium line (FRITSCH GmbH-Milling and Sizing, Weimar, Germany) using ceramic balls with a diameter of 20 mm for 48 min at 900 rpm. Treatment Procedure The hydrolysis was carried out by a batchwise method in stainless-steel bowls in the laboratory device Linitest (Original-Hanau, Hanau, Germany).Alkaline hydrolysis was carried out in 1.5 M NaOH (Gram-mol d.o.o., Zagreb, Croatia) at 80 °C for 10 min with the addition of 2 g/L of the accelerator, the cationic surfactant hexadecyltrimethylammonium chloride (HDTMAC, 25% aqueous solution, Sigma-Aldrich Co.-Merck KGaA, Darmstadt, Germany).The samples were washed with tap water and neutralized with 10% acetic acid (acetic acid 80% p.a., Gram-mol d.o.o., Zagreb, Croatia). Enzymatic hydrolysis was carried out with 0.2 g/L Amano Lipase A from Aspergillus niger (Sigma-Aldrich Co.-Merck KGaA, Darmstadt, Germany) at pH 9 and 60 °C for 60 min. For the functionalization of modified PET fabrics, the homogenized gel of 3 g/L submicron particles of chitosan in 3% acetic acid water solution was used.The fabrics were impregnated (padded) in the homogenized chitosan gel solution with wet pick-up (WP) 100%, dried at 110 °C for 2 min and thermocondensed (cured) at 170 °C for 90 s [43]. PET fabrics were washed for 1 and 10 cycles in the Wascator FOM71 CLS washing machine (Electrolux, Stockholm, Sweden), applying a modified industrial washing process.According to ISO 15797:2017 Textiles-Industrial washing and finishing procedures for testing of workwear, the washing process is carried out at a temperature of 75 °C with peracetic acid [44].This standard washing process was modified by using soft water with the addition of WFK detergent code 88060 with optical brightener, followed by the addition of ε-(phthalimido) peroxyhexanoic acid (Figure 2), i.e., PAP (Eureco LX5, Solvay, Brussels, Belgium), as an environmentally friendly agent for chemical bleaching and disinfection.Washing was carried out at a lower temperature of 50 °C, according to a specially developed program.Drying was carried out in a T5130-LAB Type A1 dryer (Electrolux, Stockholm, Sweden) for 30 min in the NORMAL LAB program. 
The labels and treatments of the PET fabric are listed in Table 1, and the process line diagram is shown in Figure 3. Characterization Methods The effectiveness of the implementation of chitosan in polyester fabrics before and after surface activation by hydrolysis and after the 1st and 10th washing cycle was determined using the methods mentioned below. The mass per unit area (m) in g/m 2 was determined in accordance with ISO 3801:1977 Textiles-Woven fabrics-Determination of mass per unit length and mass per unit area, using an analytical balance model ALJ 220-5DNM (KERN & Sohn GmbH, Balingen, Germany) with an accuracy of 0.0001 g.The change in mass per unit area was calculated from the values obtained. The breaking force (F) in N and the elongation (ε) as a percentage were determined according to ISO 13934-1:2013 Textiles-Tensile properties of fabrics-Part 1: Determination of maximum force and elongation at maximum force using the strip method on a Tensolab dynamometer (Mesdan S.p.A., Puegnago del Garda, Italy) in the warp direction; the distance between the clamps was 100 mm, the width of the sample was 5 cm, the bursting speed was 100 mm/min and the pretension was 2 N. The change in breaking force and elongation was calculated from the values obtained.The tensile index (TI), which represents tensile strength (TS) over mass per unit area (m), was calculated according to (1) using breaking force (F) and the width of the sample (d) for tensile strength calculation. AATCC TM 198-2020 Horizontal Wicking of Textiles was used to determine the hydrophilicity.Prior to the measurement, the fabrics were conditioned in the constant-climate chamber model KBF-S 240 E6, 9020-0366 (Binder GmbH, Tuttlingen, Germany) at 21 ± 2 °C and a relative humidity of 65 ± 5%. Samples were analyzed using Fourier transform infrared spectroscopy (FTIR, Perkin Elmer, Spectrum 100 S, Shelton, CT, USA) using the Attenuated Total Reflectance (ATR) technique.Four scans with a resolution of 4 cm −1 between 4000 cm −1 and 380 cm −1 were performed for each sample.When creating the diagrams (Perkin Elmer, Spectrum 100 software, Shelton, CT, USA), the FTIR curves were normalized at 1502 cm −1 . 
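Equation (1) itself did not survive extraction; based on the description above (tensile strength TS = breaking force F divided by sample width d, and tensile index TI = TS divided by mass per unit area m), an illustrative calculation might look as follows (the unit convention and function name are assumptions, not taken from the paper).

```python
def tensile_index(breaking_force_n, width_cm, mass_per_area_g_m2):
    """Tensile index TI = TS / m, with tensile strength TS = F / d, per the description of Eq. (1)."""
    tensile_strength = breaking_force_n / width_cm      # N per cm of sample width
    return tensile_strength / mass_per_area_g_m2

# Illustration with the values reported for untreated PET (F = 655 N, d = 5 cm, m = 61.23 g/m2):
# tensile_index(655, 5, 61.23) is approximately 2.14 in the mixed units used here.
```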
The SurPASS electrokinetic analyzer (Anton Paar GmbH, Graz, Austria) was used to determine the electrokinetic potential (ζ, zeta, ZP).Based on the measurement of the streaming potential, ZP was calculated according to the Helmholtz-Smoluchowski equation [45,46].It was measured as a function of the pH value (from pH 9 to pH 2) of 0.001 mol/L potassium chloride (KCl p.a., Kemika, Zagreb, Croatia) using an Adjustable Gap Cell, and the isoelectric point (IEP) was determined.The streaming potential was measured at 8 points of each sample for each pH value.The pH of the electrolyte solution was adjusted with 0.1 mol/L NaOH (Gram-mol d.o.o., Zagreb, Croatia). 
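The Helmholtz-Smoluchowski equation referenced above is not written out in the text; in its commonly used streaming-potential form (quoted here from general electrokinetics literature, not from refs. [45,46] directly) it reads:

```latex
\zeta = \frac{\mathrm{d}U_{\mathrm{str}}}{\mathrm{d}\Delta p}
        \cdot \frac{\eta}{\varepsilon\,\varepsilon_{0}} \cdot \kappa_{B}
```

where dU_str/dΔp is the slope of the streaming potential versus the applied pressure difference, η and ε are the viscosity and dielectric coefficient of the electrolyte solution, ε0 is the vacuum permittivity, and κ_B is the electrical conductivity of the bulk electrolyte.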
The morphological characterization of the surface of the PET fibers in fabrics was analyzed from micrographs taken with a scanning electron microscope (SEM) (FE-SEM, Mira II, LMU, Tescan, Brno, Czech Republic) with a magnification of 2000× using a combination of two detectors-SE (secondary electron detector) and BSE (backscatter electron detector).The samples were coated with a thin chromium layer for 180 s in a sputter coater Q150T ES Plus (Quorum Technologies, Laughton, UK). Results and Discussion The hydrolysis of PET fibers in fabrics was researched to find a more environmentally friendly, economical and/or energetically favorable surface activation process that simultaneously improves hydrophilicity and maintains satisfactory mechanical properties.For this research, hydrolysis was carried out in a more sustainable way: alkaline hydrolysis at a lower temperature and for a shorter time with the addition of an accelerator HDTMAC and bio-innovative hydrolysis using the enzyme lipase were performed.The functionalization of the PET surface with the biopolymer chitosan with minimal damage to the modified PET fabric was also researched. Table 2 shows the mass per unit area and the tensile properties (breaking force and elongation) of PET fabrics and their changes before and after hydrolysis, chitosan functionalization and one and ten washing cycles.In Figure 4, the tensile index of PET fabrics is presented.It can be seen that the mass of the untreated PET sample is 61.23 g/m 2 .After 10 washing cycles, there is a slight increase in mass (2.18%), probably due to wear and shrinkage of the PET fabric.Alkaline hydrolysis results in a weight loss of 7.04%, caused by the action of alkali on the surface, which leads to peeling and pitting on the surface but is within an optimal range.After 10 washing cycles, a similar slight increase in mass (1.18%) due to shrinkage of the alkaline-hydrolyzed PET fabric can be observed. 
Enzyme hydrolysis did not result in weight loss, regardless of the effect of the enzyme on the fiber surface. Lipases are biological catalysts that catalyze polyester hydrolysis under mild conditions, but due to their larger molecular size, this reaction only takes place on the surface [12,13] without damaging the fiber or causing pitting. The SEM micrographs discussed further below confirm this finding. The enzymatic action only causes a peeling of the surface, which is removed from the fiber during the washing cycles, resulting in a weight loss (1.27%) compared to PET_ALA.

The functionalization of untreated and modified PET fabrics with homogenized chitosan gel leads to an increase in mass per unit area of about 4%. This was to be expected, as chitosan binds to active groups of the PET fibers and forms a chitosan layer on the surface of the PET fabric.

The washing cycles do not lead to a decrease but to an increase in the mass of PET fabrics functionalized with chitosan compared to untreated PET fabric. The first washing cycle leads to weight loss, as the unbound chitosan is partially removed from the surface compared to the fabrics directly after functionalization. However, there is a difference in mass after the tenth washing cycle, which indicates a different durability of the chitosan functionalization. For the unhydrolyzed PET fabric functionalized with chitosan (PET_Ch_10W), the mass after 10 washing cycles stays the same as after the first washing cycle, whilst the alkali-hydrolyzed PET fabric functionalized with chitosan (PET_H_Ch_10W) shows an additional weight loss of 3%, and the one pretreated with an enzyme (PET_ALA_Ch_10W) shows an increase of 1%. The reason for this can be attributed to the removal of chitosan by mechanical and chemical activity during the washing process. It can be assumed that the alkaline medium of the detergent enhanced the effect of alkaline hydrolysis, so the weight loss is higher. As for the PET fabrics pretreated with an enzyme, since the enzymatic hydrolysis occurs on the surface due to the size of the enzyme [12,13], there is a possibility that chitosan is simply redeposited during the washing process. The results of the tensile properties show similar behavior.

Untreated PET has a breaking force of 655 N and an elongation of 29.16%. After 10 washing cycles, there were no significant changes in tensile properties, even when the alkaline detergent was used. There was a 2% change in the tensile index. Functionalization with homogenized chitosan gel has no effect on the strength of the fabric, as the change in tensile strength is only 1%. It even increases to 3.6% during the washing cycles, which indicates possible redeposition of chitosan on the surface and a possible shrinkage of the PET fabric. If the mass per unit area is taken into account when calculating the tensile index, the same behavior can be observed.
As expected, alkaline hydrolysis leads to a significant decrease in breaking force by 19.39% and elongation by 8.44%. Since the acceptable loss of breaking force after hydrolysis according to the literature is up to 40% [1-4,7], the process can be considered well designed. After 10 washing cycles, a loss of 6% in tensile strength can be observed, which can be attributed to the alkaline medium of the detergent. Taking into account the mass per unit area, the tensile index in alkaline hydrolysis indicates a change of 13.4%, and after 10 washing cycles, of 21.4%. By applying chitosan to the surface, the breaking force decreases by 16.79% compared to the unhydrolyzed PET fabric but increases by 10% compared to the alkali-hydrolyzed PET fabric. After the first washing cycle, chitosan was removed, which is reflected in the lower breaking force compared to the PET_H_Ch sample. Considering the tensile index, the change occurs in the first washing cycle, and after the tenth, it is 10% higher. This increase in breaking force and tensile index after 10 washing cycles may indicate shrinkage of the fabric or the accumulation of detergent on the surface due to the washing process.

The enzymatic hydrolysis did not change the tensile properties significantly. After 10 washing cycles, there was a decrease in breaking force and tensile index of 5% and 2%, respectively. This change may be attributed to the removal of impurities from the fiber surface resulting from the peeling of the fibers by the action of enzymes. The application of chitosan to the surface resulted in an increase in breaking force compared to the unhydrolyzed sample, confirming the presence of chitosan on the surface. The breaking force increases with the first washing cycle, while it slightly decreases after the tenth washing cycle, but it is still higher than for the PET_ALA sample, which may indicate the presence of chitosan particles on the surface as well as the deposition of detergent. The results of the tensile index confirm this behavior.
The most common method for determining the hydrophilic/hydrophobic properties of a solid surface is the drop test with water. When a drop of water settles on the surface, a contact angle is formed. Unlike films and foils, textiles are heterogeneous and porous, so it is sometimes not possible to determine the contact angle, making the wicking test method more suitable [46][47][48]. Since the purpose of PET surface activation is hydrophilicity, the horizontal wicking test was performed according to AATCC TM 198-2020. The results are presented as the wicking rate (W), which represents the calculated wetted area as a function of time [mm2/s] (Figure 5).

Wetting occurs first, followed by wicking, which occurs when the water penetrates the capillaries formed between two fibers or yarns by the capillary forces [47,48]. From the wicking rate (W) results shown in Figure 5, it can be seen that the PET fabric has a wicking rate of 4.74 mm2/s. PET is hydrophobic as a polymer, but in the case of PET fibers in the fabric, some spreading occurs due to the capillary system. Both hydrolyses lead to a higher wicking rate, 6.31 mm2/s for alkaline and 6.41 mm2/s for lipase hydrolysis, respectively. Chitosan functionalization improves the hydrophilicity, which increases significantly after the first washing cycle. It is possible that redeposition of chitosan occurs and that the amino groups of the chitosan contribute to water spreading. It can also be seen that all washing cycles contributed to a higher wicking rate, so it can be assumed that the water–detergent system leads to better hydrophilicity. After the 10th washing cycle, the results are lower, indicating that some amount of chitosan was washed from the surface.

In Figure 6, the FTIR spectra of the PET fabric and the chitosan powder are presented, while in Figures 7-9, the FTIR spectra of functionalized and washed PET fabrics are shown.
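As a rough illustration of how a wicking rate of the kind plotted in Figure 5 can be extracted from a horizontal-wicking measurement, the sketch below fits the wetted area against time with a straight line. The time points, areas and function names are illustrative assumptions, not values from the study.

```python
import numpy as np

def wicking_rate(times_s, wetted_areas_mm2):
    """Least-squares slope of wetted area vs. time (mm^2/s), used here as the wicking rate W."""
    slope, _intercept = np.polyfit(times_s, wetted_areas_mm2, deg=1)
    return slope

# Hypothetical measurement: wetted area recorded every 10 s for an untreated PET fabric.
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
area = np.array([0.0, 49.0, 96.0, 141.0, 190.0, 238.0])   # mm^2, illustrative only

print(f"W = {wicking_rate(t, area):.2f} mm^2/s")   # of the order of the 4.74 mm^2/s reported for PET
```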
In Figures 7-9, FTIR-ATR spectra of PET fabrics before and after hydrolysis, chitosan functionalization and one and ten washing cycles are shown, i.e., the changes in the PET fabrics pretreated with NaOH and enzymes and after several washing cycles. The changes are visible by the increase in the peak at 2960 cm−1 and the decrease in the peaks at 2920 and 2850 cm−1, which are caused by stretching within the CH2 groups in the polymer. It is clear from the spectral curves presented for all samples analyzed that the samples treated with chitosan after the pretreatment procedures show no physicochemical changes compared to the pretreated samples, and it can be assumed that the amount of chitosan applied is too small to be detected by this method. Therefore, the samples were characterized by their zeta potential. The results are presented in Figures 10-12.

The electrokinetic analysis was performed by measuring the zeta potential (ζ) on the SurPASS analyzer. From the results shown in Figure 10, it can be seen that the untreated PET fabric has a zeta potential of ζ = −69.46 mV at a pH of 8.02, with IEP < 2.
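The Helmholtz–Smoluchowski relation mentioned in the characterization methods connects the measured streaming potential to the zeta potential; a commonly used form, given here for orientation rather than as the exact expression implemented in the instrument software, is

$$ \zeta = \frac{\mathrm{d}U_{\mathrm{str}}}{\mathrm{d}\Delta p}\,\frac{\eta}{\varepsilon_{r}\,\varepsilon_{0}}\,\kappa_{B}, $$

where $U_{\mathrm{str}}$ is the streaming potential, $\Delta p$ the pressure difference across the fabric sample, $\eta$ the electrolyte viscosity, $\varepsilon_{r}\varepsilon_{0}$ its permittivity, and $\kappa_{B}$ its bulk electrical conductivity.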
The zeta potential of PET fabric in an alkaline medium is due to its hydrophobic surface. Due to the high crystallinity of the fibers, the number of carboxyl groups is very low, which makes the surface of the fiber hydrophobic. In contrast to hydrophilic surfaces, which have a higher zeta potential due to their ability to absorb water, hydrophobic surfaces have a lower zeta potential since they cannot absorb water molecules [8,46].

According to the literature, the zeta potential of PET fabrics varies from ζ = −40 mV to ζ = −80 mV depending on the structural parameters [8,54]. During washing [45,55], the fabric is worn down, resulting in fiber damage and fibrillation, so that more negative groups are available. However, considerable shrinkage of the fabric leads to a higher zeta potential in alkaline and neutral media, which is why the zeta potential is not more negative after several washing cycles. Accordingly, after 10 washing cycles, the PET fabric exhibits ζ = −49.96 mV at pH 8.65, ζ = 2.28 mV at pH 2.38 and IEP = 2.5. The use of chitosan leads to a higher zeta potential in alkaline and neutral electrolyte solutions and to a shift of the isoelectric point to a higher pH value. The higher zeta potential is due to the presence of amino groups in chitosan [30,56].

In Figures 13 and 14, the micrographs of the PET fibers in fabric taken with a scanning electron microscope (SEM) at a magnification of 2000× are shown, and the morphological characterization of the surface of the PET fibers was performed. It can be seen that the PET sample has a smooth surface with some impurities on the fiber. After 10 washing cycles, the impurities are removed from the fiber surface. The application of chitosan leads to the appearance of granular structures on the surface, indicating the presence of chitosan on the fiber surface, which is confirmed by the increase in surface mass, breaking force and zeta potential. The granular structures on the surface are also visible after the first washing cycle, confirming the stability of the treatment, which was also visible by measuring the zeta potential. The PET_Ch_10W sample shows a clean, smooth structure with no particles on the fiber surface, indicating that the chitosan was removed from the fiber surface as previously confirmed. The alkaline hydrolysis results in characteristic pits on the surface of the fibers [4,7], which are caused by the action of alkali, as can also be seen from the decrease in breaking strength compared to the non-hydrolyzed sample. The application of chitosan causes it to penetrate into the pores created by the alkali, which is visible on the PET_H_Ch sample. The washing process reduces the number of chitosan particles on the fiber.
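Before returning to the SEM observations, it is worth noting how isoelectric points such as the IEP = 2.5 quoted above are obtained: the IEP is the pH at which the zeta potential crosses zero during the titration from pH 9 down to pH 2. The sketch below estimates such a crossing by linear interpolation between the two bracketing measurements; the function name and data points are illustrative assumptions, not the measured titration curve.

```python
def isoelectric_point(ph_values, zeta_mV):
    """Estimate the IEP as the pH where the zeta potential changes sign (linear interpolation)."""
    pairs = sorted(zip(ph_values, zeta_mV))            # ascending pH
    for (ph1, z1), (ph2, z2) in zip(pairs, pairs[1:]):
        if z1 == 0:
            return ph1
        if z1 * z2 < 0:                                # sign change between neighbouring points
            return ph1 + (ph2 - ph1) * (0 - z1) / (z2 - z1)
    return None                                        # no crossing in the measured range (e.g. IEP < 2)

# Hypothetical zeta-vs-pH data of the kind produced by a streaming-potential titration.
ph = [2.0, 3.0, 4.0, 6.0, 8.6]
zeta = [4.0, -8.0, -22.0, -38.0, -50.0]
print(f"IEP ≈ {isoelectric_point(ph, zeta):.2f}")      # ≈ 2.33 for these illustrative numbers
```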
The enzymatic hydrolysis led to an exfoliation of the fiber surface, resulting in more free active groups, but the strength was not affected. After 10 washing cycles, the surface of the fibers is cleaned, which correlates with the drop in the breaking force of 5%, although the peeling caused by the enzymatic action is still visible in some places. In the enzymatically hydrolyzed sample, a large number of particles are visible on the fiber surface after the application of chitosan, confirming the presence of chitosan on the fiber that remained after the first washing cycle. The number of particles is visually higher on the enzymatically hydrolyzed sample than on the alkaline-hydrolyzed sample, which correlates with the zeta potential results. After 10 washing cycles, particles are still visible on the fiber surface, which could be chitosan residues or impurities that have accumulated during the washing process.

The antimicrobial activity of PET fabrics before and after hydrolysis, after functionalization with chitosan and after 1 and 10 washing cycles was determined according to EN ISO 20645:2004 against the following microorganisms: Gram-positive bacteria Staphylococcus aureus (S. aureus), Gram-negative bacteria Escherichia coli (E. coli) and microfungi Candida albicans (C. albicans). The results are presented in Table 3. The evaluation of antimicrobial activity includes the observation of inhibition zones, if present, or the growth of colonies beneath the sample. If the inhibition zone is present or there are no colonies beneath the sample in the contact area, the material has antimicrobial activity. The untreated fabric shows no antimicrobial activity against the tested microorganisms. After 10 washing cycles, antimicrobial activity against S. aureus and E.
coli is present, so that the applied washing process with PAP has contributed to the antibacterial activity as a disinfectant.

When treated with chitosan, all fabrics showed activity against all microorganisms tested. Better activity can be observed against the Gram-positive bacterium S. aureus, and the difference between the fabrics functionalized with chitosan is clearly visible. When chitosan was applied to PET fabric or enzyme-hydrolyzed PET fabric, there was no zone of inhibition.

However, when PET fabric was activated by accelerated alkali hydrolysis with HDTMAC, an inhibition zone was observed, which persisted even after chitosan functionalization. The reason for this may be the HDTMAC used for activation, which also acts as an antimicrobial agent. Since the functionalization with chitosan was performed directly after hydrolysis and only one rinse cycle was performed in between to remove the oligomers, it is possible that a certain amount of HDTMAC remained adsorbed on the fabric surface, as suggested by the IEP of the PET_H fabric, and provided additional antimicrobial activity.

The antibacterial activity achieved is maintained after the first and tenth washing cycles, but the effect against microfungi is removed after the first cycle for the PET_Ch and PET_H_Ch fabrics. For the enzyme-hydrolyzed fabric (PET_ALA_Ch), antimicrobial activity against microfungi remains after the first washing cycle, indicating a sufficient amount of chitosan in the fabric structure, as suggested by the zeta potential measurements.

Conclusions

In this article, the bio-innovative modification of PET fabric using enzymes and chitosan was investigated. Alkaline and enzymatic hydrolysis were carried out to activate the surface of polyester fabric, which was subsequently functionalized with chitosan. The presented results indicate the effectiveness of both hydrolyses. The alkaline hydrolysis led to a loss of strength, while no mechanical damage to the PET fabric was detected after the action of the enzyme. Enzymatic bioactivation of the surface is also more environmentally friendly, as it does not require sodium hydroxide and can be carried out at lower temperatures. The presence of chitosan on the polyester fibers was confirmed by measuring the zeta potential and is visible in the SEM images. In addition, the chitosan functionalization on the bioactivated surface showed better durability after 10 washing cycles, and the antibacterial activity achieved is maintained after the first and tenth washing cycles. In contrast to other environmentally friendly activation processes, such as plasma, this process can easily be introduced in any textile factory.

Figure 1. Protonation of chitosan amino groups in acidic medium.
Figure 3. The process line diagram.
Figure 4. Tensile index (TI) of PET fabrics before and after hydrolysis, chitosan functionalization and 1 and 10 washing cycles.
Figure 5. Wicking rate (W) of PET fabrics before and after hydrolysis, chitosan functionalization and 1 and 10 washing cycles.
The characteristic bands for PET fabric shown in Figure 6 are as follows. The signals at 2960, 2920 and 2850 cm−1 indicate symmetric C–H stretching. The intense signals at 1710 cm−1 and 1471 cm−1 are due to the stretching of the ester-carbonyl bond (C=O stretching). The signals at 1451 and 1339 cm−1 indicate the stretching of the C–O group, the deformation of the O–H group and the bending and wagging modes of the ethylene glycol segment. The signals at 1575, 1505 and 722 cm−1 indicate vibrations within the C=C group of the benzene ring. The characteristic signals at 1241, 1094 and 1017 cm−1 are the result of C–O, C–O–C and C–OH vibrations, respectively. The peaks at 871 and 846 cm−1 are associated with vibrations within the p-substituted benzene ring [30,31,49–52]. In Figure 6, the FTIR spectrum of chitosan powder (Ch) with its characteristic signals is also shown. The intense signals at 3352 and 3283 cm−1 are due to O–H stretching vibrations of hydroxyl groups (–OH) superimposed on symmetric and asymmetric stretching vibrations of N–H bonds. The signal at 2869 cm−1 can be attributed to symmetric and asymmetric C–H stretching. The signals at 1644, 1588 cm−1 and 1327 cm−1 correspond to the carbonyl group (C=O) (amide I), the bending of the N–H bond (amide II) of the primary amino group and the C–N stretching of amide III in chitosan. A characteristic signal of the CH–OH bond is seen at 1418 cm−1, and the CH2–OH bond appears at 1375 cm−1. The signal at 1150 cm−1 corresponds to the asymmetric stretching of the C–O–C bond, and the signals at 1059 and 1025 cm−1 are due to the presence of the stretching of the C–O bond [53].

Figure 6. FTIR analysis of untreated polyester fabric (PET) and chitosan powder (Ch) used for this research.
The PET_Ch sample exhibits ζ = −50.82 mV at pH 8.29, which is more positive than the unhydrolyzed PET fabric (a difference of 20 mV). A zeta potential of ζ = 26.65 mV at pH 4.12 and an IEP of 4.97 indicate the presence of chitosan on the surface. After the first washing cycle, there is no significant change in the zeta potential: at pH 8.66, ζ = −50.18 mV; at pH 4.79, ζ = 25.21 mV; and IEP = 4.96, indicating the stability of the treatment. After the tenth washing cycle, the fabric remains more positive, ζ = −30.12 mV at pH 8.23. However, the removal of chitosan from the surface of the fabric occurred, and the IEP was 2.59, which is almost the same as for the PET_10W sample.

Alkaline-hydrolyzed PET fabric at pH 8.37 has ζ = −61.15 mV and IEP = 2.62. The hydrolysis of polyester increases the number of surface-active groups, i.e., carboxyl groups, which leads to a more negative zeta potential in the alkaline electrolyte solution compared to non-hydrolyzed PET [56,57]. The PET_H_10W sample has ζ = −55.88 mV at pH 8.53, while at pH 2.11, it has ζ = 1.35 mV, with an IEP of 2.15. A zeta potential more positive by 5 mV after 10 washing cycles was found in the alkaline electrolyte solution compared to the PET_H sample, probably due to additional pilling during washing, which corresponds to the loss of breaking force. The PET_H_Ch sample has ζ = −37.67 mV at pH 8.61, while at pH 5.30, it has ζ = 3.22 mV, with an IEP of 5.4. Alkaline hydrolysis of polyester led to the formation of a larger number of active groups to which chitosan was bound, resulting in a more positive zeta potential in the alkaline electrolyte solution and a shift of the IEP to a higher pH value compared to non-hydrolyzed polyester. The first wash led to a partial removal of chitosan, resulting in a leftward shift of the IEP to 4.81. However, the zeta potential at pH 8.59 did not change significantly, with ζ = −40.70 mV, while at pH 4.54, it is ζ = 19.29 mV. The PET_H_Ch_10W sample has almost
the same behavior as the PET_H sample; at pH 8.66, it has ζ = −60.49 mV, while at pH 2.54, it has ζ = 3.79 mV, with an IEP of 2.7, which means that after 10 washing cycles, the chitosan has been completely removed from the surface of the hydrolyzed fabric. Enzymatically hydrolyzed fabric at pH 8.46 has ζ = −71.11 mV, while at pH 2.08, it has ζ = −5.77 mV, with IEP < 2, indicating a larger number of groups on the sample compared to the alkaline-hydrolyzed sample. In an acidic electrolyte solution, the enzymatically hydrolyzed sample is more electronegative than the untreated sample and also than the alkaline-hydrolyzed sample. After 10 washing cycles, the fabric also shrinks, which is reflected in a more positive curve at both alkaline and acidic pH values. For example, the PET_ALA_10W sample has ζ = −55.74 mV at pH 8.78, ζ = 3.21 mV at pH 2.25 and an IEP of 2.35, which is very similar to non-hydrolyzed PET. The PET_ALA_Ch sample has ζ = −43.57 mV at pH 8.52, ζ = 10.13 mV at pH 5.28 and an IEP of 5.59, indicating the presence of chitosan on the surface, as evidenced by both a more positive charge and a shift in the isoelectric point. After the first wash, the chitosan is also partially removed from the surface of the sample; as with the alkaline-hydrolyzed samples, the values at pH 8.31 are ζ = −50.51 mV, while at pH 4.78, they are ζ = 5.78 mV, with an IEP of 4.88. The sample PET_ALA_Ch_10W has ζ = −62.86 mV at pH 8.57, while at pH 2.79, it has ζ = 11.21 mV, with an IEP of 3.15, which indicates the removal of chitosan from the surface of the material after ten washing cycles. The enzymatically hydrolyzed sample exhibits better wash resistance compared to the alkaline-hydrolyzed and non-hydrolyzed samples, as indicated by the shift of its IEP to a higher pH value, suggesting that chitosan is not completely removed from the surface even after 10 washing cycles.

Figure 13. SEM micrographs of unhydrolyzed and hydrolyzed PET fibers in fabrics at a magnification of 2000×, after the modification and 10 washing cycles.
Figure 14. SEM micrographs of chitosan-functionalized PET fabrics at a magnification of 2000×, after the 1st and 10th washing cycle.
Table 1. Labels and treatment of the PET fabric.
Table 2. Mass per unit area, breaking force and elongation of PET fabrics and their changes before and after hydrolysis, chitosan functionalization and 1 and 10 washing cycles.
Table 3. The antimicrobial activity of fabrics before and after hydrolysis, functionalization with chitosan, and 1 and 10 washing cycles.
12,122.6
2024-09-01T00:00:00.000
[ "Materials Science", "Engineering" ]
METHOD OF CALCULATION OF CURRENT OF THE GROUND FAULTS IN THE PARALLEL OVERHEAD TRANSMISSION LINES 110-220 KV The mutual induction between the phase wires of different overhead lines situated close to each other causes an unbalanced redistribution of currents in the line wires. This leads to the emergence of an out-of-balance zero-sequence current, which negatively affects the sensitivity of the zero-sequence current protection. It is impossible to estimate such an out-of-balance current by means of the typical calculation programs for short-circuit currents. This paper describes the method of "virtual" lines for an extra correction of the values of zero-sequence currents during ground faults occurring in overhead lines 110-220 kV. There is an example of using this method for three parallel overhead lines 220 kV passing close to each other.

Introduction

In multiended overhead lines 110-220 kV, which include multiwire overhead parallel lines (OHPL) and OHPL with single or several branches, there is a problem of taking into account the zero-sequence current while adjusting the sensitive stages of zero-sequence current protection [1]. This problem exists because of the high out-of-balance currents in the lines, both in normal operating conditions and during remote asymmetrical short circuits, which arise by reason of the mutual induction between the phase wires of one line or of two separate lines. In these circumstances the values of the currents and the distance between the wires play an important role [2]. In heavily loaded OHPL such influence might cause zero-sequence currents of several dozens of amperes [3]. Out-of-balance currents are approximately the same as the currents in the set points of the sensitive functions (zones) of zero-sequence current protection. The set points are put out of operation to avoid their false work, and the reliability of backup protection is reduced.

The zero-sequence out-of-balance currents result in a great difference between the real and calculated short-circuit currents of single-phase short circuits when using typical calculation programs [4]. Finally, there is a requirement to accept set points of protection with a large safety factor to exclude non-selective operation, but the sensitivity of protection is reduced.

Experimental setup and study technique

To consider the influence of the mutual induction of the triple-circuit transmission line on the phase currents flowing in it during a fault regime, the part of a network represented in Fig. 1 has been taken for analysis. It has three parallel OHPL: W1, W2, and W3.

The section consists of three OHPL which run close to each other on the same routing and have their start points at one substation and their end points at another substation. This makes it possible to consider them as a triple-circuit line. All the lines have a metal lightning arrester grounded on each tower. Every line is divided into 7 segments where the distance between the wires is different.
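For reference, the out-of-balance zero-sequence current discussed above follows directly from the symmetrical-component definition: it is one third of the phasor sum of the three phase currents. The short sketch below, with illustrative phasor values only, shows that even a modest asymmetry between phases produces a non-zero residual current 3I0 of the kind seen by ground-fault protection.

```python
import cmath
import math

def zero_sequence(i_a, i_b, i_c):
    """Zero-sequence component I0 = (Ia + Ib + Ic) / 3 of three phase-current phasors."""
    return (i_a + i_b + i_c) / 3

def phasor(magnitude_A, angle_deg):
    return cmath.rect(magnitude_A, math.radians(angle_deg))

# Illustrative load currents for a slightly unbalanced 220 kV circuit (assumed values).
ia = phasor(400.0, 0.0)
ib = phasor(395.0, -121.0)
ic = phasor(410.0, 119.0)

i0 = zero_sequence(ia, ib, ic)
print(f"3*I0 ≈ {abs(3 * i0):.1f} A")   # residual current seen by zero-sequence protection (~20 A here)
```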
According to the analysis implemented with the program for the calculation of short-circuit currents in electric power networks RTKZ 2.03 [5], for unbalanced conditions caused by a ground short circuit of one of the phases of the considered OHPL, the required accuracy in the values of short-circuit currents is not achieved, because they do not correspond to those registered by the emergency recording instruments at the fault moment. The main obstacle to achieving the needed result is the absence of an opportunity to set the mutual induction between separate phases and the impossibility of taking into consideration the effect of the grounded wire hawsers and unbalanced load conditions. These limits can be eliminated by the application of extra methods for a more precise definition of the parameters of the conditions.

Virtual line

The inherent and mutual resistances of the equivalent circuit have been calculated during the research based on the geometrical parameters of the lines. It is proposed to set "virtual" lines to simulate the asymmetry caused by the mutual induction not only between the separate circuits (equivalent wires of circuits) but also between the separate phase wires of a single line. In Figure 2 there is an explanatory scheme of the lines having three "virtual" ones.

The resistances of the "virtual" lines are calculated by the following method. There can also be mutual induction between the main circuit W_i and the "virtual" lines W_i1 and W_i2; thus, two "virtual" lines must be set in parallel with each line. There is another way of setting the "virtual" lines, according to which one "virtual" line is set in parallel to the basic line. The resistances of the "virtual" lines are then defined as follows:

1. The phase with the largest current I_wi,max is identified.
2. The resistance of the "virtual" line z_wi1 connected to this phase is defined according to the relation z_wi1^(1) = z_wi1^(0) = k·z_wi^(1), where z_wi1^(1) and z_wi1^(0) are the direct- and zero-sequence resistances of the "virtual" line, z_wi^(1) is the direct-sequence resistance of the main line, and k = 3–4.
3. The two phases with the minimal current are switched off.

The currents calculated using this method confirm its efficiency, because the error of the calculation is not more than 10%, whereas without using the method described above it might be about 20-30%.

Conclusion

Thus, the proposed method of defining the parameters of the "virtual" lines with the program RTKZ 2.03, which allows the modes of a network having numerous longitudinal-transversal asymmetries and load to be calculated, makes it possible to simulate the effect of the line's own mutual induction between the wires "phase-to-phase" and "phase-to-wire-lightning arrester" and of the mutual induction of neighbouring circuits. At the same time, it allows the distribution of the zero-sequence currents (phase currents) of lines having significant mutual induction between circuits in the load mode to be calculated accurately enough for relay protection (10% error), which is hard to achieve by means of the typical programs used in Russia to calculate short-circuit currents.
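To make the "virtual"-line rule above concrete, the sketch below applies the reconstructed relation z_virtual = k·z_line^(1) with k between 3 and 4, attaching the virtual line to the phase carrying the largest current. The relation itself is a hedged reading of the garbled formula in the source, and the function and variable names are illustrative, not those of the RTKZ 2.03 software.

```python
def virtual_line_impedance(z_line_positive_seq, k=3.5):
    """Positive- and zero-sequence impedance assigned to the 'virtual' line (reconstructed rule, k = 3..4)."""
    z_virtual = k * z_line_positive_seq
    return z_virtual, z_virtual       # the same value is used for both sequences in this sketch

def phase_with_max_current(phase_currents):
    """Return the phase label whose current magnitude is largest; the virtual line is connected there."""
    return max(phase_currents, key=lambda ph: abs(phase_currents[ph]))

# Illustrative data for one 220 kV circuit (assumed values).
z1_line = complex(2.9, 11.2)                      # ohm, positive-sequence impedance of the main line
currents = {"A": 402.0, "B": 398.0, "C": 417.0}   # A, load-mode phase currents

ph = phase_with_max_current(currents)
z1_virt, z0_virt = virtual_line_impedance(z1_line)
print(f"virtual line on phase {ph}: z(1) = z(0) = {z1_virt:.1f} ohm")
```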
This paper has been prepared with the support of the Ministry of Education and Science of the Russian Federation within the framework of the Federal Target Program "Research and development in priority areas of the Russian scientific and technological complex for 2014-2020", Agreement No. 14.579.21.0083, PNIER theme "Development of technical solutions of a distributed system of smart grid backup, providing increased reliability and survivability of electrical equipment" (unique identifier RFMEFI57914X0083).

In the variant with two "virtual" lines per circuit, it is necessary to set two parallel lines with the simulation of the open-phase mode in line No. 1 (failure of phases B and C) and in line No. 2 (failure of phases A and C); the resistances of the extra lines No. 1 and No. 2 set in parallel with the main line are defined in the corresponding way.

Figure 2. An explanatory scheme of the lines having three "virtual" ones to simulate an asymmetry.
1,572.2
2017-01-01T00:00:00.000
[ "Physics" ]
An unconstrained binary quadratic programming for the maximum independent set problem

Abstract. For a given graph G = (V,E), the maximum independent set problem is to find the largest subset of pairwise nonadjacent vertices. We propose a new model which is a reformulation of the maximum independent set problem as an unconstrained binary quadratic programming problem, which we then solve by means of a genetic algorithm. The efficiency of the approach is confirmed by the results of numerical experiments on DIMACS benchmarks.

Introduction

The maximum independent set problem (MIS) is one of the central combinatorial optimization problems and is NP-hard [1]. The problem is relevant for many practical applications in computer science, operations research and engineering [2], such as register allocation in a compiler, assigning channels to radio stations, exam scheduling, graph coloring, and the reader collision problem [3]. Some studies were based on greedy algorithms and tabu search [4,5], or used the intersection graph of axis-parallel rectangles [6]. In [7] the authors proposed a method based on an improvement of the maximum independent set algorithm given by F. Glover in [8]. Other studies are based on a method that utilizes the polynomially solvable critical independent set problem [9].

The unconstrained binary quadratic programming problem is to maximize (or minimize) a function of the form f(x) = x^T Q x, where Q = (q_ij) is an n × n matrix of constants and x is an n-vector of binary variables. This formulation is able to model a wide range of problems in many areas, such as traffic management [10] and machine scheduling [11]. UBQP has demonstrated its relevance and effectiveness on problems known for their complexity, such as the set-partitioning problem [12], the set packing problem [13], the vertex coloring problem [14], and the linear ordering problem [15]. Given its NP-hard nature [1], various approaches have been proposed for solving this model using exact methods [16,17] and metaheuristic methods such as memetic algorithms [18,19], scatter search [20], simulated annealing [21], adaptive memory approaches based on tabu search [22][23][24], and, recently, a combination of GRASP and tabu search [25].

This paper presents an efficient method for solving the maximum independent set problem (MIS) via its modeling as an unconstrained binary quadratic programming problem (UBQP). The paper is organized as follows: we define the maximum independent set problem in Section 2. Section 3 presents the transformation of the linear program into the unconstrained binary quadratic program associated with the maximum independent set problem. In Section 4 the ingredients of our algorithm are described, including an adapted genetic algorithm, and Section 5 draws some conclusions.

Problem definition

The maximum independent set problem (MIS) may be written in the form of a linear problem (LP_MIS) with binary variables, maximizing the number of selected vertices subject to the edge constraints (a reconstruction is given below), where x_i^2 = x_i and x_i ∈ {0,1}. The problem (LP_MIS) consists of a linear objective function with a single type of constraint, x_i + x_j ≤ 1, and the number of these inequality constraints is equal to card(E). Our purpose is to reformulate the problem (LP_MIS) as a binary quadratic problem without constraints of the form x^T Q x, where Q is a square symmetric matrix of dimension card(V). For this we apply a transformation to the constraint set.

Transformation

We introduce the constraints into the objective function in the following way.
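A standard way of writing the two formulations described above — consistent with the edge constraints and the quadratic objective mentioned in the text, though the paper's own displayed equations are garbled here — is

$$ (\mathrm{LP}_{\mathrm{MIS}})\qquad \max \sum_{i\in V} x_i \quad \text{s.t.}\quad x_i+x_j\le 1\ \ \forall\,(i,j)\in E,\qquad x_i\in\{0,1\}, $$

$$ (\mathrm{UBQP}_{\mathrm{MIS}})\qquad \max_{x\in\{0,1\}^{|V|}} f(x)=x^{T} Q\, x, $$

with Q a symmetric card(V) × card(V) matrix built from the graph as described next.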
The objective is transformed into the form x^T D x using (1) (the diagonal form follows from x_i^2 = x_i), and the constraints x_i + x_j ≤ 1 are then introduced through the product terms x_i x_j. The problem (LP_MIS) is thus replaced by the quadratic problem without constraints.

Example. We consider the undirected graph in Fig. 1; on this graph we obtain three independent sets: C1 = {2, 4, 5, 7, 9}, C2 = {1, 6, 8} and C3 = {3}. We reformulate it into an unconstrained binary quadratic programming problem: the example satisfies the corresponding linear program with x binary, and applying the transformation gives the associated matrix Q. The independent set C1 = {2, 4, 5, 7, 9}, i.e., x = (010110101), gives an optimal solution for UBQP_MIS with x0 = 5. If we add or remove one or more vertices from C1, the value of UBQP_MIS decreases; for example, if we add vertex 3 to C1, i.e., x = (011110101), we find x0 = −2.

Solving UBQP_MIS

After having transformed our MIS problem into an unconstrained binary quadratic form, we solve it using an adapted genetic algorithm, choosing operators that suit the MIS problem.

Genetic algorithm

Our resolution approach for the UBQP_MIS problem is based on a genetic algorithm (GA). The GA is a search process based on the laws of natural selection and genetics. Generally, a GA consists of three simple operations. Their functioning is simple: we start with an initial population, we evaluate the performance of each individual, and we create a new population of potential solutions using the evolutionary operators selection, crossover and mutation; the GA cycle is repeated until a desired stopping criterion is reached.

The individuals of the initial population are generated randomly with equal probability, such that the genes are assigned a value of 0 or 1. To build a diversified initial population, each individual added to the population must be different from all the existing solutions of the population. We use the best known lower bound based on the vertex degrees d_i, given by Caro and Tuza [26]: during the initialization of the population we fix the size of the independent set to p = Σ_{i∈V} 1/(d_i + 1) (rounded to an integer), that is, the number of 1s in each individual of the population.

We consider the objective function x^T Q x as the evaluation function (fitness) of each individual in the population. For the selection, we first randomly choose two individuals and then apply the tournament selection operator in order to keep the better individual. The comparison between two individuals is carried out according to their fitness.

Crossover is a recombination operator that combines parts of two parent individuals to produce offspring that contain some genetic information from both parents. A probability parameter p_c is set to determine the crossover rate. We opt for a special crossover: we choose two random integers r, s ∈ [1, n] in each pair of parents P1 and P2, which represent inter-gene points. The crossover permutes two blocks of genes between the two parents. The two selected blocks have the same size and the same number of genes equal to one. By this crossover we always keep the number of 1s in each new individual obtained. Table 1 presents a comparison between the results obtained using a 1-point crossover, a 2-point crossover and the proposed crossover for 25 tests. The tests were performed for a crossover rate equal to p_c = 0.8 and a mutation rate equal to p_m = 0.2. The average values (X_avg) of the solutions obtained with the proposed crossover are clearly superior to those of the other two crossover types.
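A minimal sketch of the transformation described above: the diagonal of Q rewards selected vertices and each edge contributes a penalty that makes adjacent selections unprofitable. The penalty weight and the small example graph used here are illustrative assumptions; the paper's exact coefficients and its 9-vertex example graph are not reproduced in this extract.

```python
import numpy as np

def mis_to_ubqp(n_vertices, edges, penalty=2.0):
    """Build a symmetric Q so that maximizing x^T Q x discourages selecting adjacent vertices."""
    Q = np.eye(n_vertices)              # +1 on the diagonal for every selected vertex (x_i^2 = x_i)
    for i, j in edges:
        Q[i, j] -= penalty / 2.0        # split the -penalty * x_i * x_j term over the two symmetric entries
        Q[j, i] -= penalty / 2.0
    return Q

def objective(Q, x):
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

# Small illustrative graph: a 4-cycle with one chord.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
Q = mis_to_ubqp(4, edges)

print(objective(Q, [1, 0, 1, 0]))   # independent set {0, 2}: value 2.0
print(objective(Q, [1, 1, 1, 0]))   # adding the adjacent vertex 1 drops the value to -1.0
```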
The mutation is used with the aim of further exploring the search space and reaching solutions that the crossover cannot produce. We randomly choose two distinct genes, which are then permuted, usually with a small probability p_m.

Fig. 2. Used crossover operator.

Remark. We fix p = Σ_{i∈V} 1/(d_i + 1), then we apply the transformation to obtain the UBQP_MIS problem and run the genetic algorithm; each time we find a maximum independent set we increment the number p by one in order to find a larger independent set.

Obtained results

In this section, we present the experimental results obtained after running our algorithm on instances available in the literature (see the web page http://mat.gsia.cmu.edu/COLOR03/ for a complete description of the instances) and compare them with those given by [7,9]. The platform of our experiments is a personal computer running Windows 7, AMD Athlon(tm) X2 dual-core QL-65 (2 CPUs), 2. GHz, with 4 GB RAM. The results obtained by our algorithm are reported in Table 2. For each instance, we indicate the number of vertices n, the number of edges d_0, and the values X_S, X_DBG and X_UBQP, which respectively denote the cardinality of the maximal independent set given by S. Butenko et al. [9], the DBG algorithm [7], and the UBQP_MIS algorithm; the symbol '-' means that the information is not available. The parameters of the GA for the UBQP_MIS problem are: population size = 100, crossover rate p_c = 0.8, mutation rate p_m = 0.2, and maximum number of iterations T_max = 200.

Conclusion

In this paper we have presented a method to reformulate the linear program of the maximum independent set problem as an unconstrained binary quadratic problem; the proposed algorithm proves to be highly effective in solving a range of benchmark instances from the literature. Most of these instances have been easily solved with a reasonable execution time. For the first 13 instances we obtained results identical to those in [9]; our results are also identical for the first 21 instances given in [7]; to show the effectiveness of our algorithm we tested 15 other instances.
2,322.6
2012-10-25T00:00:00.000
[ "Mathematics", "Computer Science" ]
Photoactive composite films prepared from mixtures of polystyrene microgel dispersions and poly(3-hexylthiophene) solutions

Whilst polystyrene microgels belong to the oldest family of microgel particles, their behaviours when deposited onto substrates or prepared as composites have received little attention. Because polystyrene microgels are solvent-swellable, and inherently colloidally stable, they are well suited to form composites with conjugated polymers. Here, we investigate the morphology and light absorption properties of spin coated composite films prepared from mixed dispersions of polystyrene microgels and poly(3-hexylthiophene) (P3HT) for the first time. We compare the morphologies of the composite films to spin coated microgel films. The films were studied using optical microscopy, SEM, AFM, wide-angle X-ray diffraction and UV-visible spectroscopy. The films contained flattened microgel particles with an aspect ratio of ≈10. Microgel islands containing hexagonally close packed particles were evident for both the pure microgel and microgel/P3HT composite films. The latter were electrically conducting. The composite film morphology was dependent on the microgel and P3HT concentration used for film preparation, and a morphology phase diagram was constructed. The P3HT phase acted as an electrically conducting cement and increased the robustness of the films to solvent washing. The composite films were photoactive due to the P3HT component. The absorbance of the films was tuneable and increased linearly with both microgel and P3HT concentration. The results of the study should apply to other organic swellable microgel/conjugated polymer combinations and may lead to new colloidal composites for future optoelectronic applications.

Introduction

For non-aqueous microgel dispersions, which are the subject of the present study, colloidal stability arises from a combination of a negligible effective Hamaker constant and steric stabilisation. 2 Microgels have excellent film forming properties 7 and have been used in surface coatings. 8 This report focuses on polystyrene microgel films and composites. Whilst films of water-swellable microgels have been well studied, [9][10][11][12][13][14][15] there is a surprising absence of reports for polystyrene microgel films in the literature. There is only one report to our knowledge that provides any information about polystyrene microgel deformation. 16 It is not clear whether non-aqueous microgels flatten to the same extent as their water-based analogues. The rules governing non-aqueous microgel film formation have not been established, especially in the context of composites. There was some TEM evidence from an early study that polystyrene microgel particles deposited from ethylbenzene formed islands with a tendency toward hexagonal close packing. 16 This observation is potentially useful because highly ordered particulate films have application in photonics. 17,18
Here, we explore microgel ordering in films by first investigating the morphology of spin coated polystyrene microgel films. We hypothesised that the processes responsible for formation of the crystalline islands would survive inclusion into composite films provided the added polymer did not cause microgel aggregation. We investigate this hypothesis by preparing microgel/poly(3-hexylthiophene) (P3HT) composite films and studying their morphology. P3HT was selected as the linear polymer for this study for three reasons. Firstly, P3HT and linear polystyrene composites can be prepared using solution methods, 19 which indicates the two polymers have some miscibility. Secondly, P3HT has a higher electron density than polystyrene, which was expected to increase the contrast between the two polymers for techniques such as SEM. Finally, because P3HT is photoactive, 20,21 its combination with colloidal films may lead to new optoelectronics applications.

Polystyrene microgels are one of the oldest reported classes of microgels. 22 However, they have received little attention compared to water-swellable microgels. Polystyrene microgels are usually crosslinked with divinylbenzene (DVB). 16,22 They are easily prepared using surfactant-free emulsion polymerisation (SFEP), 2,23 which is a scalable process. Furthermore, it is easy to tune polystyrene microgel particle size and degree of swelling using the preparation conditions. 24 Whilst latex films 25 as well as blends of conjugated polymers with linear (insulating) polymers have been well studied, 19,26 the present work is the first to report composite films prepared from mixtures of solvent-swellable microgel particles with conjugated polymer solutions to our knowledge. 15,[27][28][29][30][31][32]

Schmidt et al. investigated spin coated poly(N-isopropylacrylamide-co-acrylic acid) (poly(NIPAM-AA)) microgel particles and showed that the particles were flattened. 10 Wellert et al. investigated thermally-responsive acrylate-based microgel particles and also found that they were flattened. 15 They also showed that the extent of thermally-triggered volume decrease was lower for the deposited particles compared to the dispersed particles. Lyon's group has published a series of papers where monolayers containing laterally compressed water-swellable microgels were studied. 9,13,14,28 However, little attention has been paid to films prepared from mixtures of microgels and linear polymer. Furthermore, reports of films containing microgel multi-layers formed in one spin coating step are scarce. The only study that has reported deposited solvent-swellable microgels to our knowledge is that by Saunders and Vincent, 16 which noted that polystyrene microgel particles flattened when deposited onto SEM grids from ethylbenzene. In the latter work aspect ratios were not reported and films were not prepared.

The approach used to prepare microgel/P3HT films in this work is depicted in Scheme 1. P3HT solutions and polystyrene microgel dispersions were mixed and then films deposited by spin coating. Ethylbenzene is a good solvent for polystyrene 16,22 and was used in this study. The concentrations of P3HT used here were sufficiently low that the microgel/P3HT mixtures were colloidally stable. The latter simplified interpretation of the film morphologies and enhanced property control because the distribution of deposited particles within the composite films was not dominated by attractive inter-particle interactions.
Here, we present a study of photoactive films comprising microgel particle/conjugated polymer blends. First, pure spin-coated microgel films are examined to establish the microgel concentration above which the film morphology changes from that of a particle monolayer to a multilayer film. The effects of microgel and P3HT concentration on the morphology of microgel/P3HT composite films are then investigated and a morphology phase diagram constructed. The electrical conductivity and light absorption properties of the films are also studied. Several pure and composite films are shown to contain islands of crystalline microgel particles. The data reveal that P3HT behaves as an electronically conducting cement and increases the robustness of the composite films to washing with ethylbenzene. The results of this study should be generally applicable and may enable construction of microgel/conjugated polymer composites for future optoelectronic applications.

(98%, Aldrich) and ethylbenzene (Fisher, 99.8%) were all used as received. Doubly-distilled deionised water was used for microgel synthesis.

Preparation of polystyrene microgel dispersion

The method used to prepare the polystyrene microgel was based on earlier work. 16 The polystyrene microgel particles were first prepared in latex form in water using SFEP. Briefly, water (265 ml) was adjusted to a pH of 9 using aqueous NaOH solution (1 M), added to a 500 ml reaction vessel, stirred at 350 rpm and heated to 70 °C. An initiator solution of 4,4′-azobis(4-cyanovaleric acid) (0.244 g, 0.871 mmol) in water, with the pH adjusted to 11.0, was prepared. DVB (0.086 g, 0.661 mmol) was mixed with styrene (28.6 g, 0.275 mol) and added to the vessel. The initiator solution was then added with stirring and the polymerisation was allowed to proceed for 3.5 h under nitrogen. The latex was subjected to repeated centrifugation and re-dispersion using water. After freeze-drying, the microgel particles were redispersed in ethylbenzene in their microgel form. The microgel had a nominal composition of 99.7 wt% styrene and 0.3 wt% DVB.

Preparation of spin coated films

Pure microgel and P3HT films are identified as Mx and Py, respectively. Accordingly, M3.0 and P0.8 correspond to films deposited from a microgel dispersion or a P3HT solution containing 3.0 or 0.8 wt% solid, respectively. The composite microgel/P3HT films studied here are identified in terms of the concentrations of microgel and P3HT used for their preparation, i.e., MxPy. The M3.0P0.8 film was prepared from a mixed dispersion containing a microgel concentration (x) of 3.0 wt% and a P3HT concentration (y) of 0.8 wt%. The film was prepared by dissolving P3HT (2.0 mg) in ethylbenzene (98 mg) at 70 °C. Separately, lyophilised microgel (5.0 mg) was dispersed in ethylbenzene (95 mg) at 70 °C. Then the P3HT solution (40 mg) was added to the microgel dispersion (60 mg) at 70 °C. The mixed dispersion was sonicated at 70 °C until it became translucent. The mixed dispersion was rapidly added (dropwise) to a clean and dry glass slide and spin coated at 3000 rpm for 15 s using a Laurell WS-650 Mz-23NPP spin processor. (The glass microscope slides were extensively cleaned using a standard hydrogen peroxide based procedure. 33) All of the other MxPy films were prepared by the same method as described above using the appropriate concentrations.
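The MxPy naming can be checked with a short mass balance: mixing 40 mg of the 2.0 wt% P3HT solution with 60 mg of the 5.0 wt% microgel dispersion gives the stated 3.0 wt% microgel and 0.8 wt% P3HT in the final dispersion. The sketch below, with function names chosen only for illustration, reproduces this arithmetic.

```python
def mixed_concentrations(mass_a_mg, wt_a, mass_b_mg, wt_b):
    """Weight percent of each solute after combining two stock mixtures (simple mass balance)."""
    total = mass_a_mg + mass_b_mg
    return (mass_a_mg * wt_a / total, mass_b_mg * wt_b / total)

# M3.0P0.8: 60 mg of 5.0 wt% microgel dispersion + 40 mg of 2.0 wt% P3HT solution.
microgel_wt, p3ht_wt = mixed_concentrations(60.0, 5.0, 40.0, 2.0)
print(f"microgel: {microgel_wt:.1f} wt%, P3HT: {p3ht_wt:.1f} wt%")   # 3.0 wt% and 0.8 wt%
```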
Physical measurements

Dynamic light scattering (DLS) measurements were carried out using a 50 mW He/Ne laser operated at 633 nm with a standard avalanche photodiode and 90° detection optics connected to a Malvern Zetasizer Nano ZS90 autocorrelator. Surface profilometry measurements were conducted using a Dektak 8 stylus profilometer (Bruker). Films were scratched and the height difference between the film and the substrate was then measured. Optical microscopy was conducted with an Olympus BX41 microscope and white transmitted light. Fractional coverage values were determined using ImageJ. SEM measurements were obtained using a Philips FEGSEM instrument. The accelerating voltage was 5 kV and the samples were coated with Pt. Number-average diameters (D_SEM) were obtained by counting at least 100 particles. Atomic force microscopy (AFM) images were obtained using an Asylum Research MFP-3D operating in tapping mode. Imaging was performed using Olympus high aspect ratio etched silicon probes (OTESPA) with a nominal spring constant of 42 N m⁻¹ (Bruker AXS S.A.S., France). The cantilever oscillation frequency varied between 300 and 350 kHz and was determined by the auto-tune facility of the Asylum software, as was the drive amplitude. The set-point was adjusted to just below the point at which tip-sample interaction was lost, to minimise sample damage. Wide-angle X-ray diffraction (WAXD) was performed using a PANalytical X'pert diffractometer. Out-of-plane diffraction patterns were measured using Cu Kα radiation. UV-visible spectra were obtained using a Hitachi U-1800 spectrophotometer. The electrical conductivity measurements were performed with a Jandel multi-height four-point probe station with a cylindrical tungsten carbide four-probe head (spacing 1.00 mm). The measurement results were recorded using a Keithley 2440 multimeter and a current of 10 mA.

Polystyrene microgel particle characterisation and spin-coated microgel film morphology

The microgel particles were prepared using SFEP following established methods. 16 The particles had a z-average diameter (d_z) measured by DLS of 725 nm (Fig. 1a). The particles swelled when re-dispersed in ethylbenzene (and became microgels) and the d_z value increased to 1270 nm (Fig. 1b). An SEM image of the particles deposited from water in their collapsed (latex) form shows that low-polydispersity particles were prepared (Fig. 1c). The number-average diameter for the particles deposited from water, D_SEM(w), was 635 nm and the coefficient of variation (CV) was 7.9%. The D_SEM(w) value is smaller than the d_z value for the particles dispersed in water. This difference arises because d_z values are more strongly affected by the larger particles within a size distribution than number-average diameters determined by SEM. It is also possible that some aggregates were present in the aqueous latex dispersions. This study focussed on the microgel particles dispersed in a good solvent (ethylbenzene), which gave dispersions that are inherently stable. 2 As discussed below, the microgel particles dispersed well in ethylbenzene.

The microgel particles showed a difference in their tendency to cluster during drying. The electrostatically stabilised latex particles tended to stay dispersed or form small clusters (Fig. 1c and d), whereas the microgels deposited from ethylbenzene formed monolayer islands that were highly ordered and were effectively nanocrystals (see also Fig. 1e and f). This observation confirms an earlier TEM image. 16
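As an illustration of how the number-average diameter and coefficient of variation quoted above follow from SEM particle counts, a minimal sketch is given below; the diameter values are placeholders, not measured data from this work.

```python
import numpy as np

# Placeholder diameters (nm) for particles measured from SEM images;
# in practice at least 100 particles are counted.
diameters_nm = np.array([612.0, 641.0, 598.0, 655.0, 630.0, 644.0, 619.0, 668.0])

d_number_average = diameters_nm.mean()                              # number-average diameter, D_SEM
cv_percent = 100.0 * diameters_nm.std(ddof=1) / d_number_average    # coefficient of variation

print(f"D_SEM = {d_number_average:.0f} nm, CV = {cv_percent:.1f}%")
```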
The present SEM data show clearly that extensive ordering occurred. It is well known that ordered close-packed 2D arrays of particles can be formed in evaporating solvent films by the process of convective assembly. 34 Solvent evaporation from regions of an already-ordered array (or a small nucleus of ordered particles) at the edges of drying solvent droplets generates a hydrodynamic flux, pulling further particles into the array. For the present microgels, the inter-microgel repulsive interactions were short range (due to steric repulsion), which enabled the swollen particles to come into close contact without aggregation. This process enabled them to rearrange to form more ordered clusters. By contrast, the latex particles deposited from water were not swollen, had long-range electrostatic interparticle repulsion and were less able to form ordered clusters.

The deposited microgel particles had a strong tendency to give flattened interfaces when in contact with neighbouring particles (Fig. 1f). The tendency of deposited water-swellable microgel particles to deform at surfaces is well known. 9,10,15 Fig. 1f provides evidence that the polystyrene microgels were also able to deform extensively when deposited. The number-average diameter for the microgel particles deposited from ethylbenzene, D_SEM(eb), was 895 nm (CV = 9.2%). The latter diameter was much larger than the latex diameter (D_SEM(w) = 635 nm), which is attributed to particle flattening that occurred during deposition of the swollen particles.

Prior to studying spin-coated microgel and microgel/P3HT films it was important to determine the concentration of microgel particles required to achieve full coverage of the glass substrate. Optical micrographs were obtained for spin-coated microgel films using concentrations in the range of 0.5 to 5.0% (Fig. 2). As the microgel concentration increased, the microgel islands (Fig. 2a) became larger (Fig. 2b) and interconnected (Fig. 2c and d). The morphology apparent in Fig. 2b is similar to that reported by Wellert et al. for their spin-coated ethylene glycol based microgels. 15 Once the microgel concentration reached 3.0%, gaps were no longer apparent (Fig. 2e and f). (The white points in Fig. 2e and f were isolated (small) microgel clusters on top of the microgel monolayer.) The fractional surface coverage of the substrate was calculated from the optical images and is shown in Fig. 2g. It follows that x = 3.0% corresponded to the critical microgel concentration for full coverage. A large-area SEM image of the M3.0 film (covering an area of about 100 × 100 μm²) is shown in Fig. S1 (ESI†) and confirms that complete coverage occurred. Continuous macroscopic microgel particle films could be prepared using x > 3.0%. A demonstration that a coherent film was prepared can be seen in Fig. S2 (ESI†), where an optical micrograph for a scratched M3.0 film shows intact fragments. The latter observation implies that the polystyrene chains of neighbouring microgel particles had interpenetrated. The interpenetration of microgels at surfaces and interfaces is a controversial topic. However, recent reports have indicated that the phenomenon is widespread. 35,36 We propose that the absence of significant electrostatic repulsion between the microgel particles aided interpenetration of the peripheral chains during solvent evaporation.

To enable the height of the flattened microgel particles to be estimated, an AFM tapping-mode image and line profile were obtained for M0.5 (Fig. 3a and b).
The average peak-to-trough height from the line profile was ~130 nm. The inset of Fig. 3b shows a line profile for a representative particle with the height and distance drawn at the same scale. These data confirm that the microgel particles flattened considerably. A perspective image (Fig. 3c) shows that all the particles had a flattened morphology. The aspect ratio for the microgel particles (lateral diameter divided by particle height) was ~10. This value is similar to that reported for poly(NIPAM-AA) microgels. 10 Deposited viscoelastic vinyl acetate-co-ethylene polymer particles have also shown aspect ratios comparable to that reported here. 37 The value is much smaller than the value of at least 300 reported for highly deformable "self-crosslinked" poly(N-isopropylacrylamide)-based microgel particles. 28 However, the latter particles were specially designed to have ultralow crosslinking. Our data show that the flattening of deposited solvent-swellable microgels is comparable to that reported for water-swellable microgels.

To further probe the microgel particle film (Mx) morphologies, they were examined using SEM as a function of x (Fig. 4a-h). These images confirm that the coverage increased with x and reached a monolayer at x = 3.0% (cf. Fig. 2g). The particle clusters present for M0.5 (Fig. 4a and b) and M1.0 (Fig. 4c and d) formed 2D crystals. For all of the 2D crystals in this work the average number of nearest neighbours for each particle was six and hexagonal close packing was evident. Hexagonally close-packed particles are highlighted in Fig. 4a and d. Fig. S3 (ESI†) shows a large-area SEM image for the M1.0 film in which many islands of 2D crystals can be seen. However, for the M3.0 (Fig. 4e and f) and M6.0 (Fig. 4g and h) films the particles were not able to form large 2D crystals. (The FFT insets of Fig. 4e and g show that the films were not ordered into an array.) Thus, 2D crystals were formed for the spin-coated films when the particle concentration used was less than 3.0%. Higher-magnification SEM images for the Mx films (second row of Fig. 4) enabled estimates of the deposited microgel particle diameters. The values are plotted as a function of x in Fig. 4i (black diamonds). These values are much larger than the diameter for isolated (spherical) particles deposited from water (D_SEM(w) = 635 nm). Hence, the microgel particles had flattened for all of the films. The D_SEM(eb) values for the M0.5 and M1.0 films from Fig. 4i can also be compared to the D_SEM(eb) value of 895 nm obtained from microgel particles dried in air on SEM stubs (Fig. 1e and f). It follows that spin coating introduced an additional contribution to flattening for the microgel particles, presumably as a result of the increased shear present. 38

The thicknesses of the M1.0, M3.0 and M6.0 films were measured using profilometry (Fig. 4i, black squares). The thickness values measured for the M1.0 and M3.0 films, of 110 and 143 nm respectively, are not significantly different from the height of 130 nm measured by AFM for M0.5 (the latter data point is also shown in Fig. 4i). By contrast, the thickness for the M6.0 film (505 nm) was much larger.
The D_SEM(eb) values for M3.0 and M6.0 were similar (Fig. 4i) and appear to have reached plateau values. Therefore, the much greater film thickness for M6.0 at a near-constant D_SEM(eb) value is attributed to multilayer formation. Restrictions on lateral particle flattening were imposed by neighbouring particles when x > 3.0%, which compressed the swollen particles during deposition. For M6.0, multilayer formation occurred because the microgel particle swelling pressure reached a critical value and further compression was more energetically demanding than multilayer formation. If it is assumed that the flattened microgel particle thickness was constant (~130 nm), the M6.0 film would have contained ~4 microgel layers.

The changes of particle packing and morphology with x are depicted in Fig. 4j. For x < 3.0% (less than full coverage), monolayer islands of nanocrystals formed and became larger as x increased. Once x reached 3.0%, complete coverage occurred and the microgel particles became kinetically trapped without the opportunity to rearrange to produce extensive 2D crystalline arrays. The high number-density of neighbours decreased the average microgel diameter. At greater x values (e.g., 6.0) the particles formed multilayers. It is noted that spin coating differs considerably from the centrifugal method developed by Lyon et al. for increasing particle concentration whilst preserving monolayer coverage. 14 With spin coating the particles are not as strongly forced to retain a monolayer geometry and are subjected to less lateral compression. Consequently, they can form multiple layers at lower energetic cost. Spin coating is well suited to forming large-area films and is scalable.

Morphologies of spin-coated microgel/P3HT composite films

We next investigated microgel/P3HT films. Whilst P3HT was soluble in ethylbenzene heated to 70 °C at concentrations of 0.8-3.2%, the solubility decreased as the temperature approached room temperature and thermoreversible gels formed. A similar observation was reported elsewhere 39 for P3HT dissolved in xylene, which is an isomer of ethylbenzene. Here, mixed dispersions of microgel and P3HT were compatible at 70 °C without evidence of aggregation. Rapid spin coating of the heated mixtures (Scheme 1) provided macroscopically smooth composite films which strongly absorbed light. Fig. 5 shows the compositions of the mixed microgel/P3HT dispersions used to prepare the MxPy composite films. Optical micrographs for the films are also shown. In addition, the average film thickness is shown next to each symbol. The optical micrographs for the composite films investigated showed that they were heterogeneous at the micrometre length scale, and microgel particles can be seen in many of the images, especially for the films with lower x and y values.
It can be seen from the micrographs that the composites that exhibited the most uniform dispersion of microgel particles were M1.0P0.8 and M3.0P0.8. The composite films with the highest y values (e.g., M1.0P1.6 and M3.0P1.6) exhibited micrometre-scale phase separation, with particle aggregates and darker, particle-free, regions evident. The composite films containing the highest x values (M4.5P0.8 and M6.0P0.8) exhibited evidence of sheets of microgel particles that were separated by darker P3HT-rich regions. The composite films in the top right-hand corner of the diagram (blue region) had the greatest heterogeneity of the microgel and P3HT-rich regions. It follows that the best mixing of the two phases at the micrometre scale was achieved using the intermediate concentrations of microgel and P3HT (M1.0P0.8 and M3.0P0.8). We will show later that these films also had monolayers of microgel particles.

In order to obtain topographical morphological information, AFM data were obtained for the M3.0P0.8 film. Fig. 6a shows that nanoparticle aggregates were present between the microgel particles (indicated by an arrow). The line profile for the film (Fig. 6b) shows that the height of the microgel particles above the baseline was ~70 nm. This value is much lower than the microgel height of about 130 nm determined from Fig. 3b for the M0.5 system (which did not contain P3HT). A maximum sketched with the same height and width scale (inset of Fig. 6b) shows that the maxima were broader compared to the M0.5 particles (inset of Fig. 3b). We propose that the valleys for M3.0P0.8 were the top of deposited P3HT. We term the latter nanometre-sized regions between the microgel particles P3HT nanodomains. Consequently, the height of 70 nm is proposed to correspond to that of the microgel particles above the surrounding P3HT nanodomains. If it is assumed that the flattened microgel particle thickness was ~130 nm, an estimate of the thickness of the P3HT nanodomains for M3.0P0.8 would be ~60 nm. The latter value is about a factor of 7 greater than the thickness measured for a control P0.8 film (9 nm). Clearly, the presence of the microgel particles strongly affected the deposition of the P3HT phase during spin coating.

The presence of the P3HT strongly affected the ability of the films to withstand solvent washing. Indeed, it was very difficult to remove microgel particles from the composite films when they were dipped and sonicated in ethylbenzene (a good solvent for polystyrene and P3HT). Fig. S4 (ESI†) shows images of M3.0P1.6 and M3.0 films after ethylbenzene soaking and sonication. Even when the ethylbenzene was heated to 70 °C and sonicated for extended periods, the microgel particles remained on the surface. However, the M3.0 film exhibited substantial loss of particles when dipped in ethylbenzene with minimal sonication. P3HT appeared to behave as a cement and locked the microgel particles in place. This proposal is supported by a phase image for the M3.0P0.8 film (Fig. 6c), which shows that the P3HT nanodomains filled gaps between the microgel particles. The perspective image shown in Fig. 6d shows clearly that the P3HT nanodomains filled gaps between the microgel particles across a wide region of the film.

To determine the effect of P3HT concentration (y) on the nanometre-scale morphology of the microgel/P3HT films, an SEM study was conducted.
Fig. 7a and b show images for the M3.0P0.8 film. The microgel particles were dispersed as islands that had crystalline packing. The extensive crystallinity within the microgel islands can also be seen from Fig. S5 (ESI†), which shows a large-area SEM image (~100 × 100 μm²). Furthermore, the surface coverage by the microgel particles was less than unity for M3.0P0.8. Both of these features contrast strongly with the situation for M3.0, which did not show extended crystallinity (Fig. S1, ESI†) and had a coverage of unity (above). These differences are due to P3HT diluting the microgel particles for M3.0P0.8. It is important to note that the presence of crystalline islands (Fig. 4b and Fig. S5, ESI†) shows that P3HT did not interfere with microgel self-assembly. This observation confirms that P3HT did not cause aggregation of the microgel particles. Formation of 2D crystals requires microgel particle rearrangement, and this may occur on an experimentally meaningful timescale when the inter-particle attraction is comparable to kT. Photoactive composites with crystalline particles embedded within them could enable new applications as photonic or optoelectronic materials.

SEM images were also obtained for composite films with larger x and y values. Images are shown for M3.0P1.6 (Fig. 7c and d) and M6.0P0.8 (Fig. 7e and f). These images reveal the presence of two P3HT environments. There were P3HT nanodomains (between the microgel particles) as well as microdomains (between islands of microgel particles). The latter are identified with arrows in Fig. 7a and c. The values for D_SEM(eb) and the film thickness are shown as a function of x in Fig. 7g. The D_SEM(eb) value decreased from 1070 nm for M3.0P0.8 to 715 nm for M6.0P0.8. This trend is due to an increase in the number of nearest neighbours for each microgel particle. The film thicknesses were about 120 nm for M1.0P0.8 and M3.0P0.8, which is close to 130 nm (the height for M0.5). This finding strongly suggests that microgel monolayers were present. By contrast, the thickness increased significantly for M4.5P0.8 and M6.0P0.8 (Fig. 7g). The thickness values for these films (>400 nm) were much larger than the height of the microgel particles. Accordingly, it can be concluded that M4.5P0.8 and M6.0P0.8 contained microgel multilayers. This trend is the same as that observed for the pure microgel films (Fig. 4i).

The effect of y on the morphology and thickness was investigated using the M3.0Py series. The D_SEM(eb) value for the M3.0P1.6 film was 805 nm, which was much smaller than the value for M3.0P0.8 (1070 nm). This result indicates that restriction of microgel particle lateral spreading occurred. Presumably, an increase in P3HT solution viscosity occurred in regions between microgel particles upon cooling and this opposed microgel spreading. Whilst osmotic deswelling of polystyrene microgel particles by added non-adsorbed polymer is known, 2 the P3HT concentrations used for the mixed dispersions were low (a maximum of 1.6%) and this effect is not considered to have been significant. These results suggest that the added linear polymer concentration can be used to tune the extent of microgel spreading within composite films, which is a new observation.
The film thickness data (Fig. 7h) show that the thickness of the M3.0Py films increased for y > 1.2. This thickness increase was significantly larger than that expected from a linear combination of the thicknesses of each component (pure P3HT and microgel) and is most likely due to multilayer formation. The film thicknesses (189-225 nm) may indicate some inter-digitation of the flattened microgel particles.

From consideration of the data presented above, depictions of the different composite morphologies are presented in Fig. 7i. It is proposed that addition of P3HT affected the film morphology by preventing the microgel particles from coming into close contact. Monolayer films with P3HT nanodomains and microdomains were present for M3.0P0.8. Furthermore, both P3HT domain types persisted when multilayers were formed. The morphological roles of P3HT were to (partially) oppose microgel spreading and to increase the (average) separation between particles.

The structural order of the P3HT phase at the Angstrom scale was probed by WAXD using out-of-plane diffraction for the M3.0P1.6 and P1.6 films (Fig. S6a, ESI†). Data were also obtained for an M3.0 film as a control. Peaks were evident for M3.0P1.6 and P1.6 at 2θ = 5.5°, 10.5° and 23.0°, which are assigned to primary (100), secondary (200) and face-to-face (010) stacking, respectively. 20,40 The (100) peak originates from lamellae oriented perpendicular to the film plane 20 and was pronounced for M3.0P1.6 and P1.6. These data suggest that stacking of P3HT lamellae perpendicular to the film plane occurred for both the M3.0P1.6 composite and P1.6. It follows that the microgel particles enabled normal P3HT packing to be maintained at the Angstrom scale in the composites.

The electrical conductivities of several microgel/P3HT films were also measured using the four-point probe method (Fig. S6b, ESI†). The conductivities, and hence charge transport abilities, for M3.0P0.8, M3.0P1.2 and M3.0P1.6 were similar to that for pure P3HT (i.e., the P1.6 film). These data suggest that the charge transport pathways for the M3.0Py films, which are proposed to have involved inter-connected P3HT nanodomains and microdomains, were not significantly restricted by the microgel particles. The data support the view that the P3HT phase surrounded the microgel particles in the composite films. It follows that P3HT acted as an electrically conducting cement for the microgel particles within the composites.
An important question concerning the extent of microgel particle flattening within the composite films is the extent of P3HT penetration into the microgel particles in the dispersed state prior to spin coating. It is reasonable to expect that microgel particles containing P3HT would be less likely to flatten upon spin coating than pure microgel particles. Whether or not penetration occurs will depend in large part on the mesh size of the microgel particles and the hydrodynamic diameter of the P3HT chains in ethylbenzene at 70 °C. The number-average molecular weight range of the P3HT used here was 25 000-35 000 g mol⁻¹ based on supplier information (see Experimental section). These values may be considered as linear polystyrene equivalent molecular weights. An earlier study of microgels with the same composition as used here showed 16 that osmotic deswelling (due to exclusion of linear polystyrene chains) occurred when the number-average molecular weight of the linear polystyrene exceeded 5450 g mol⁻¹. Consequently, the latter gives an approximate polystyrene exclusion molecular weight. Therefore, there is good reason to believe that the P3HT chains used here were too large to penetrate the interior of the swollen microgel particles prior to film formation and were excluded.

Light absorption properties of microgel/P3HT composite films

In order to probe the light absorption behaviour of the composite films, UV-visible spectra were measured as a function of x for MxP0.8 (Fig. 8a). The spectra show strong absorption from P3HT vibronic bands in the 500 to 600 nm region. 41 There was also a contribution to the absorbance values in the 300 to 500 nm region from light scattering due to the microgel particles. The latter is evident from comparison of the spectra for the pure M3.0 and M6.0 films (Fig. 8a). The absorbance at 525 nm was plotted as a function of x (Fig. 8b) and appears to be proportional to x. Interestingly, as x increased the absorbance for the composite films increased by a factor of 3 whilst the weight fraction of P3HT in the composites decreased by a factor of 8 (i.e., from a P3HT weight fraction of 1.0 for P0.8 to 0.12 for M6.0P0.8). Thus, diluting P3HT with microgel particles increased the absorbance from the P3HT component for these composites. Introducing P3HT in phase-separated domains may strongly enhance light scattering, since the bulk of the film then displays spatial refractive index variation on a length scale comparable to the wavelength of visible light. Additionally, the MxP0.8 films (x = 4.5 and 6.0) had multiple layers (discussed above), which increased the composite pathlength. However, the latter must be balanced against the decrease in P3HT weight fraction (above). We propose that enhanced light scattering contributed to an increased efficiency of P3HT light absorption. By enhanced light scattering we mean that the scattering of light from the microgel particles caused an increased pathlength in the P3HT phase. The variation of x is a potentially useful method for tuning (and increasing) light absorption from P3HT within microgel/P3HT composite films. This approach should also apply to other conjugated polymer/microgel composites and could provide a useful method for increasing light absorption whilst minimising the use of conjugated polymers, which can be very expensive.

The effect of y on the light absorption properties of M3.0Py films was also studied (Fig. 8c).
The vibronic bands for P3HT were most pronounced for M3.0P1.2 and M3.0P1.6. The absorbance at 525 nm increased linearly with y for the M3.0Py films (Fig. 8d). The absorbance for the composite films was consistently higher than that for the respective pure Py films. The linear dependence of absorbance on y for the composites most likely originates from an increase in the P3HT thickness within the nanodomains and microdomains. These data show that tuneability of light absorption by composite microgel/P3HT films can also be achieved using y. It is remarkable that the absorption of light by these new composite films can be independently increased in a tuneable manner by increasing the content of either component (microgel or P3HT).

The absorbance data from Fig. 8b and d were plotted as a function of film thickness (from Fig. 7g and h) to probe the relationship between these two quantities (see Fig. 9). Caution must be applied when considering these data because the absorbance and thickness values are dominated by different parts of the composite, i.e., P3HT and the microgel particles, respectively. The data for both the M3.0Py and MxP0.8 films appear to show linear behaviour, although the gradients differ. It is expected that the absorbance of a photoactive film (e.g., a P3HT composite) will be proportional to the film thickness based on Beer's law 42 and also on experimental data reported for P3HT-based films. 43 Here, the gradient for the M3.0Py films is highest because P3HT addition is the most direct method for increasing the P3HT pathlength. The gradient is lowest for the MxP0.8 systems because the absorbance of the P3HT phase increased due to the indirect (secondary) influence of the microgel particles. The latter increased the effective pathlength via enhanced light scattering as well as the overall film thickness of the P3HT phase due to multilayer formation.
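For reference, the proportionality invoked above follows from the Beer-Lambert law; the symbols below are generic (molar absorption coefficient ε, chromophore concentration c, pathlength l) rather than quantities fitted in this work.

```latex
A(\lambda) \;=\; \log_{10}\!\frac{I_0}{I} \;=\; \varepsilon(\lambda)\, c\, l \;\propto\; l
\qquad \text{(at fixed chromophore concentration } c\text{)}
```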
Conclusions

In this study we investigated the morphology of spin-coated films of pure polystyrene microgel and of microgel/P3HT composites. In contrast to other work, 10,15,28 the present films were prepared using solvent-swellable microgels. The deposited microgel particles were flattened, with aspect ratios of about 10. In this regard, the polystyrene microgels behave in a similar manner to that reported for water-swellable microgels. For the pure microgel films, the critical concentration at which full monolayer coverage occurred was 3.0%. At higher concentrations multilayer films were produced. The pure microgel films and composite films contained microgel particle islands that were crystalline when the microgels were present as monolayers. The formation of the crystalline islands is likely to be general for spin-coated films and composites prepared from solvent-swellable microgels. The microgel/P3HT composites were photoactive and electrically conducting. P3HT was shown to behave as an electrically conducting cement and increased the robustness of the composites when immersed in ethylbenzene. The microgel/P3HT films strongly absorbed light, and this benefited from enhanced light scattering from the microgel particles within the matrix. Moreover, the absorbance for the composite films was tuneable and increased linearly with both microgel and P3HT concentration. For future studies, solvents that dissolve both P3HT and polystyrene at room temperature should simplify film deposition; one candidate is chloroform. There is good reason to expect that the results of this study will apply to other organic-swellable microgel/conjugated polymer composites. Highly ordered microgel/conjugated polymer composites which are also electrically conducting may provide new opportunities for future optoelectronic applications. For example, a microgel-based route is potentially available for hybrid polymer solar cell construction. 44 This approach is possible because (a) polymer solar cells with good efficiencies have been prepared in the presence of polystyrene, 19 (b) polystyrene microgel particles have been fully infiltrated with CdSe nanocrystals using microgel particle mesh size control 4 and (c) the film thicknesses for the flattened microgel particles in this work are within the range used for hybrid polymer solar cells. 45 Applications as photonic materials may also be possible if the extent of ordering can be increased in order to allow precise control over optical band gaps.

Fig. 1 Polystyrene microgel characterisation. (a) and (b) show DLS data for the particles dispersed in water and ethylbenzene. SEM images for the particles deposited from water (c and d) or ethylbenzene (e and f) are also shown. Hexagonal close packing of deformed microgel particles is evident from (f).

Fig. 2 Effect of microgel concentration on surface coverage. Optical micrographs (a to f) and surface coverage as a function of microgel concentration for spin-coated microgel films (g). The scale bars correspond to 20 μm.

Fig. 3 Morphology of microgel particles. (a) Shows a tapping-mode AFM image for the M0.5 system. (b) Shows a line profile for the particles. The inset shows a line profile for one of the particles drawn to the same scale for the height and distance axes. (c) Shows a perspective image for the particles. The latter was processed by levelling scan lines using a first-order polynomial including only bare substrate areas, followed by Gaussian smoothing with a 10 pixel radius.
Fig. 4 Morphologies of spin-coated microgel films (Mx). The film identities for (a) to (h) are given. The insets for (e) and (g) are Fast Fourier Transforms (FFT). (i) Variation of the diameter and thickness with x. (j) Depiction of proposed microgel particle packing. The scale bars for the top and middle rows are 10 and 2 μm, respectively.

Fig. 5 Composition and morphology phase diagram of the MxPy films. The black circles and blue squares indicate films that had a monolayer or multilayer of microgel particles, respectively (see text). The numbers next to the symbols are the thickness (nm). Scale bars for the main micrographs and insets are 20 and 2 μm, respectively.

Fig. 6 Morphology of the M3.0P0.8 film. (a) Shows a tapping-mode AFM image. (b) Shows a line profile from the image in (a). The inset shows a line profile for one of the particles drawn with the same scale for the height and distance axes. (c) Shows a phase image for the film. (d) Shows a perspective tapping-mode image. The arrows show P3HT nanodomains (see text).

Fig. 7 Morphologies of microgel/P3HT films. The film identities for (a) to (f) are shown. The arrows in (a) and (c) show P3HT microdomains. (g) Variation of thickness and particle diameter with x for MxP0.8 films. (h) Variation of thickness for M3.0Py and Py films. (i) Depiction of proposed particle packing for the composite films (see text). The scale bars for the top and middle rows are 10 and 2 μm, respectively.

Fig. 8 Effect of microgel and P3HT concentration on the light absorption properties of microgel/P3HT films. (a) UV-visible spectra for MxP0.8 films. (b) Variation of absorbance at 525 nm with x for MxP0.8 films. (c) UV-visible spectra for M3.0Py films. (d) Variation of absorbance at 525 nm with y for M3.0Py and Py films.

Fig. 9 Dependence of microgel/P3HT film absorbance on film thickness. The thickness and absorbance data have been taken from Fig. 7g and h and from Fig. 8b and d, respectively.
8,904
2015-10-21T00:00:00.000
[ "Materials Science" ]
Integrated turbine-generators for hydropower plant – a review The paper provides a comprehensive review of water turbines integrated with electrical generators. The integration consists of combining the generator with the turbine in one device without the use of a gearbox. Practical implementations of these solutions date back to the 1980s and are becoming increasingly common. Such a solution offers wide possibilities for integration and for reducing the cost of the hydro unit; however, due to certain technological problems, it is not widely available. This article classifies the different types of integration and analyses them from technical and economic points of view. Practical implementations and operational problems are also shown. Introduction The article presents the technical possibilities of integrating a water turbine with a generator. The focus is on water turbines with variable-speed synchronous generators, whose turbine module can be adjusted according to the flow and water level in the river. Turbines with the generator on the same shaft were deployed in several dams between 1940 and 1950 and are now regaining interest in Central Europe. The review covers existing solutions for integrating a water turbine with an electric generator. On the basis of the available scientific literature and practical knowledge, the properties of these solutions are presented. Particular attention is given to technical aspects and operational features. These analyses are the basis for proposing an integration concept that meets the design requirements, combining the beneficial features of the water-to-electricity conversion system with the elimination of known disadvantages and operational problems. The standard machine solution in low-head hydroelectric power plants is a turbine connected via a speed-increasing gear to an asynchronous generator (usually a squirrel-cage induction machine). This solution is commonly used and well proven. A multi-element energy generation system requires the construction of power plant buildings and their protection against flooding. The cost of hydrotechnical construction for low heads is 50-70% of the total investment. In the case of the lowest heads, the classic solution excludes the economic viability of the construction. A significant disadvantage of using a gearbox is the mechanical losses it generates. The use of oils and greases also does not comply with current legal regulations. A cheap asynchronous generator operates at close to rated efficiency down to about 75% of rated power, while below that its electrical losses increase significantly, by up to 60%. Advantages include the availability of spare parts and the simplicity of renovation and maintenance. L.F. Harza, who in 1919 patented [1] his own solution and built full-scale (1:1) prototypes of these machines, is responsible for the first correct integration of a water turbine with a generator. Their installation in hydrotechnical facilities, despite careful execution, did not bring success due to problems with moisture and damage to the generator windings [2]. For many years, this solution has been the goal of various companies involved in the production and operation of rotating machines; consequently, a number of studies and patents for the same device have been created around the world. The first tubular turbines were put into operation in 1936 in the Polish city of Rościno by the Swiss turbine manufacturer Escher Wyss (now Andritz Hydro).
The development of rotating machinery has since led to advanced tubular turbine technology with the generator integrated into the turbine. A compact machine structure can be ensured by placing the generator in a sealed casing (bulb) axially to the turbine (Fig. 1). Such solutions are most often based on a cage induction generator, whose rotational speeds are high, so for low and the lowest heads it is necessary to use a gear that multiplies the revolutions. Placing the gear, lubricated mostly with oil, inside the bulb poses a risk of lubricating oil leakage and contamination of the river. Such a design of low-head, low-power machines limits the flow cross-section and causes significant disturbances in the water flow in the hydraulic channels of the machine, which reduces the efficiency of the assembly. For small machines, there are also problems with dampness of the generators placed inside the bulb when the machine is at a standstill. Depending on the version, there are different types of integrated tubular turbines, which differ mainly in the arrangement of the generator. There are many design solutions in the literature for integrating a water turbine and an electric generator. To allow technical analysis, only the implemented solutions, i.e. those built and operated in real hydroelectric facilities, are presented. Full integration In order to reduce the dimensions of the machine and at the same time eliminate the problem of narrowing the hydraulic section of the machine, the water turbine can be placed inside the rotor of the electric generator, the so-called full integration (Fig. 2). A synchronous generator with permanent magnets is used here because it allows the gears to be eliminated and offers greater generator efficiency over a wide range of loads. The first such solutions appeared in the 1940s [4][5]. These were low-power (<1 MW) facilities and had problems with the sealing between the stator and the rotor. The first high-power facility (100 MW), containing 10 integrated hydro sets based on a propeller turbine with adjustable guide vanes, was the power plant on the Rhein River (Laufenburg, Germany), established in 1987 [6]. Another solution, made by the Austrian company VA TECH HYDRO GmbH and called StrafloMatrix [7][8][9], is intended for small hydropower plants (Fig. 3). Two units (300 kVA and 700 kVA) were made and installed at the Agonitz hydropower plant on the Steyr River in Austria. A similar solution for small hydropower plants was installed at the AKWA power plant on the Biała Głuchołaska river near Nysa (Poland) (Fig. 4) [10]. There are two parallel hydro units with a total power of 150 kW. The energy conversion system is also based on a propeller turbine and a synchronous generator with permanent magnets. In this solution, instead of a turbine rotor blade control system, variable rotational speed was used through a power electronic converter. This solution has been thoroughly tested and analysed by, among others, the authors of this study [11][12][13][14]. All the solutions described above relate to the integration of a water turbine inside a synchronous generator, hereinafter referred to as full integration. The advantages and disadvantages of such a solution are presented below. The significant advantages of integrating the turbine inside the rotor of the electric generator are the limitation of the dimensions of the hydro unit to a pipe segment as well as the lack of a drive shaft and gear (Fig. 5).
Such a hydro-assembly can be placed directly in the pipe, from which the electric cables of the generator are led out. Another significant advantage of this solution is the cooling of the electric generator by the water flowing inside its body. This ensures very good cooling conditions, which enables significant long-term overloading of the generator. This reduces the cost of making the generator. Fig. 6 shows an example of the temperature distribution of a generator integrated in a tubular structure [10] operating at rated load. The maximum temperature of the stator housing is 36 °C. This enables continuous operation with high efficiency and makes it possible to dispense with mechanical ventilation of the power plant unit. "Wet" integration The main disadvantage of full integration is the physical phenomena related to the gap between the rotor (the turbine rotor with an outer ring to which permanent magnets are attached) and the stator of the electric generator. Two solutions are used here: a wet gap and a dry gap. A wet gap is one that is filled with river water during normal operation and does not require special sealing. However, it is necessary to seal the surface of the permanent magnets of the rotor and the stator windings. The main disadvantage of this solution is the loss of mechanical power in the gap caused by friction of the water against the surfaces of the stator and rotor. The turbulent movement of water, and the transport of fine fractions of impurities together with the flowing water, destroys the seals. The power losses of the turbine set, based on measurements of an actual hydro set, amounted to as much as 10% of the generator rated power, depending on the size of the gap and the rotational speed of the turbine. Analytical calculations verifying the wet-gap loss are presented in [13], which gives relationship (1) (Table 1). The change of the Reynolds number with angular velocity does not significantly change the hydraulic resistance coefficient. It can therefore be concluded that the mechanical power losses in the gap depend on the rotational speed to the third power (Fig. 7). For the rated operating condition, these losses are significant and constitute 6.4% of the hydro set rated power. Contamination in the water present in the gap can significantly increase mechanical wear, increasing the erosion of the sealing materials. The other disadvantage of the wet gap, apart from power losses, is the rapid wear of the rotor and stator seals when working with water contaminated with, e.g., sand. Operation of the facility [10] showed the need to replace the sealing materials at least once a year. An example of the influence of contaminated water on the abrasion of the composite sealing the stator and rotor in a turbine integrated with a wet gap is shown in Fig. 8. Another significant disadvantage of this type of solution is the gradual ingress of moisture into the generator stator windings, which reduces the electrical insulation. This can result in an electrical short circuit in the stator windings. Stator insulation burnout due to an electrical short circuit resulting from poor electrical insulation in a turbine integrated with a wet gap is shown in Fig. 9. Furthermore, the integration of a dimensioned turbine in the rotor of a synchronous generator requires the generator stator to be designed individually in each case. This increases the cost of the hydro unit and makes it difficult to create a series of machine types.
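To illustrate the cubic speed dependence of the wet-gap loss discussed above, the sketch below evaluates a generic disc-friction estimate of the form P_gap ≈ k·ρ·ω³·r⁵. The coefficient k, the rotor radius, the rated speed and the rated power are illustrative placeholders, not the reviewed paper's relationship (1) or measured data.

```python
import math

def wet_gap_friction_loss(omega_rad_s, rotor_radius_m, rho_kg_m3=1000.0, k=0.02):
    """Generic disc-friction estimate of mechanical loss in a water-filled gap.
    The dimensionless coefficient k lumps together gap width and surface effects
    and is an illustrative placeholder, not a value from the reviewed paper."""
    return k * rho_kg_m3 * omega_rad_s**3 * rotor_radius_m**5

rated_speed_rpm = 250.0   # assumed rated speed of a low-head unit
rated_power_w = 75_000.0  # assumed rated power of the hydro set

for fraction in (0.5, 0.75, 1.0):
    omega = 2 * math.pi * rated_speed_rpm * fraction / 60.0
    loss = wet_gap_friction_loss(omega, rotor_radius_m=0.4)
    print(f"{fraction:>4.0%} speed: gap loss ~ {loss / 1000:.2f} kW "
          f"({100 * loss / rated_power_w:.1f}% of rated power)")
```

Because the loss scales with the cube of the speed, halving the rotational speed reduces the gap loss to roughly one eighth of its rated value, which is consistent with the trend described in the review.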
Fig. 9. Burnout of the stator insulation due to an electrical short circuit resulting from poor electrical insulation in a turbine integrated with a wet gap. "Dry" integration A dry gap between the generator stator and rotor is one in which, during normal operation, there is a mixture of a small amount of water and air that does not cause significant power losses. This solution is achieved through the use of a special elastic sealing system. In the 1980s, a flexible seal dedicated to a turbine with adjustable turbine blades was developed (Fig. 10) [15]. This solution was implemented at the HPP Weinzödl power plant (Austria, on the Mur River) in 1982, with two parallel STRAFLO turbines with a diameter of 3.7 m and a total capacity of 8 MW. This type of solution has been extensively tested by the Institute for Hydraulic Fluidmachinery at Graz University of Technology and described in [15], in which a 400 mm diameter turbine was tested at a speed of 1500 rpm (Fig. 11). Multivariate tests were carried out to analyse power losses, the amount of leaking water, and the strength of various sealing materials for different levels of water pollution and various displacements of the turbine rotor relative to the generator. Fig. 11. Laboratory stand for testing a dry gap in an integrated turbine [15]. Sealing tests with clean water showed a power loss of 2% and a water leakage of 0.04 l/s. Contamination of the water with sand resulted in an almost twofold increase in power loss and in water leakage through the seal. Long-term tests showed significant damage to the sealing materials (Fig. 12). Fig. 12. An example of the destruction of the integrated turbine seal for a dry gap [15]. In order to reduce the wear of the seal, a system for cleaning the seal with pressurised water using a dedicated labyrinth was proposed. The proposed solution served its purpose, but it is technically complicated. Conclusions The study presents various existing design solutions for the integration of a water turbine with an electric generator in order to enable the compact construction of a hydro unit. Technical and operational analysis of these solutions allowed a direction of integration characterised by high efficiency and reliability to be developed. This was achieved through the elimination of a mechanical gear, the use of a synchronous generator with permanent magnets, and an integration method that allows water to be eliminated from the generator gap through an arrangement similar to that of submersible pumps. Additionally, the implementation of various electric energy conversion systems was taken into account, on the basis of which a system using a power electronic converter was selected. This solution enables trouble-free operation of the generator with the power system and wide regulation possibilities. Variable-speed operation allows mechanical control systems to be dispensed with [16][17]. A further advantage is the possibility of adaptive operation of the hydro unit with variable rotational speed according to the conditions in the river, which will facilitate the creation of a series of machine types. Variable rotational speed may bring additional benefits in the form of simplification of the structure of the hydro unit. The unification of machines and multiple construction of units with the same diameter significantly reduce the costs of assembly and service. In summary, it can be stated that the integration of the turbine with a direct-drive generator for the lowest heads has technical and economic justification.
Construction investments are significantly lower than for classic solutions. Full integration is attractive because it simplifies the hydro set construction and does not limit the flow cross-section. However, the problems related to power losses and electrical insulation in the case of wet integration, or the complicated sealing system in dry integration, limit the reliability of these solutions. The classical bulb housing with a synchronous generator is characterised by simpler and cheaper implementation, reliability and the absence of lubricating materials.
3,262
2021-01-01T00:00:00.000
[ "Engineering", "Environmental Science" ]
PERFORMANCE OF THE FIXED-POINT AUTOENCODER Original scientific paper The autoencoder is one of the most typical deep learning models and has mainly been used in unsupervised feature learning for many applications such as recognition, identification and mining. Autoencoder algorithms are compute-intensive tasks. Building a large-scale autoencoder model can satisfy the analysis requirements of huge volumes of data, but the training time sometimes becomes unbearable, which naturally leads to investigating hardware acceleration platforms such as FPGAs. Software versions of the autoencoder often use single-precision or double-precision representations, but floating-point units are very expensive to implement on FPGAs. Fixed-point arithmetic is therefore often used when implementing an autoencoder in hardware, but the accuracy loss is often ignored and its implications have not been studied in previous works. Only a few works have focused on accelerators using fixed bit-widths for other neural network models. Our work gives a comprehensive evaluation of the fixed-point precision implications for the autoencoder, aiming at the best performance and area efficiency. The method of data format conversion, the matrix blocking methods and the approximation of complex functions are the main factors considered, according to the constraints of hardware implementation. The simulation method for the data conversion, matrix blocking with different degrees of parallelism, and a simple PLA approximation method were evaluated in this paper. The results showed that the fixed-point bit-width does have an effect on the performance of the autoencoder. Multiple factors may have crossed effects. Each factor can have a two-sided impact, discarding "redundant" information and "useful" information at the same time. The representation domain must be carefully selected according to the computation parallelism. The results also showed that using fixed-point arithmetic can guarantee the precision of the autoencoder algorithm and achieve acceptable convergence speed. Introduction Deep learning technology has attracted enormous investment from famous companies such as Google, Facebook, Microsoft, IBM and Baidu. It has been widely studied and used in the machine learning community, with successful results demonstrated by various models such as Deep Belief Networks (DBNs) [1], the sparse autoencoder [2], sparse coding [3], deep Convolutional Neural Networks (CNNs) [4] and others. Among these models, the autoencoder (AE) is one of the most popular, mainly used in unsupervised feature learning for recognition and mining of images, speech and vision. In the famous demonstration by Google [5], an AE combined with other models was applied to a problem of large-scale unsupervised learning from internet images on a cluster of 1,000 machines (16,000 cores).
Running an AE is a time-consuming task because it involves multilayer iterations of large-scale matrix operations which have strong dependencies. Reducing the training time of an AE is one critical barrier that limits its adoption for building deep structures. Jin [6] designed a behavioural model of an autoencoder in Verilog for parallel FPGA implementation. A similar example is an RBM (a building block of a DBN) of 256×256 nodes, which was tested on FPGAs and gained a speedup of 145-fold over an optimised C program running on a 2.8 GHz Intel processor [7]. The processing characteristics of the AE are very similar to those of the DBN. Its acceleration on FPGAs is also an attractive topic under investigation and the overall computational time is expected to improve.

Parallel implementations of deep learning structures often use vast and regular processing units to map the model nodes partially or wholly at a time. Weights and neuron values are stored in on-chip RAM during processing and are swapped out to off-chip memory afterwards. It is too expensive to support a large number of floating-point units on chip and to store values using the standard double-precision floating-point representation in on-chip RAMs. Many of the previous attempts with FPGAs for machine learning algorithms used fixed and regular bit-widths (8, 16 or 32 bits) [8,9] without analysing in depth the implications for accuracy. Previous works have also mainly analysed the impact of bit-widths on the accuracy and execution time of the RBM [10,11].

There is an interesting insight from the denoising stacked AEs [2], which corrupt inputs in a stochastic way to gain better performance. For a similar reason, converting double-precision floating-point arithmetic to fixed-point arithmetic loses some information from the inputs as well as from intermediate data. The training process becomes more "coarse" than before because of such approximation. Some redundant and useless information in the high-dimensional input may be discarded during processing, and features can then be learnt more easily. Meanwhile, some critical information may be lost, making the features more indistinct and harder to learn. Suitable bit-widths for the AE are expected to make the approximation advantages outweigh its disadvantages, keeping or even improving the final performance.

Speed and resource usage in FPGAs are sensitive to the bit-width, as much of the logic is mapped to fine-grain LUTs. As AEs have grown in size to satisfy the deep learning demands of contemporary applications, resource saving due to narrower bit-widths has become more attractive for implementing larger processing arrays in FPGAs. There is no relevant research on the arithmetic effects on the AE. It is much less clear whether there is an optimal choice of bit-width which can achieve area efficiency and best performance at the same time. This paper reports a comprehensive study of the performance of the fixed-point AE.

The AE model

The AE model is an unsupervised learning structure including three layers: the input data layer, the code layer and the reconstruction data layer. The model encodes the input data into a set of codes and then decodes them to obtain the reconstruction data. Fig. 1 shows the procedure. The reconstruction output is considered an equivalent expression of the input, so the model defines a cost function to minimise the error between the input and the reconstruction output.
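A hedged, generic form of such a cost, written here only to make the three terms described below concrete (the paper's exact Eq. (1) and its notation are not reproduced), is:

```latex
J(W, b) \;=\;
\underbrace{\sum_{i}\bigl\lVert x^{(i)} - g_{\theta_2}\!\bigl(f_{\theta_1}(x^{(i)})\bigr)\bigr\rVert_2^{2}}_{\text{reconstruction error}}
\;+\; \underbrace{\lambda \lVert W \rVert_2^{2}}_{\text{weight bound}}
\;+\; \underbrace{\beta \sum_{j} \mathrm{KL}\!\left(\rho \,\middle\|\, \hat{\rho}_j\right)}_{\text{sparsity of the code } y}
```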
Eq. (1) defines the cost function as the sum of the reconstruction term indicating the error, the bound term restricting the scale of the weight matrix (W), and the sparsity term controlling the sparse state of the code y. The model uses the variation of the error to update the parameters of the encode model and the decode model, thus learning the features of the input.

In Eq. (1), the sparsity term is used to avoid learning the identity mapping from input to output. There are other methods to achieve this goal. The stacked denoising AE (SDAE) is one of the most efficient variants of the AE. The SDAE model denoises its inputs at a given corruption level. This is done by first corrupting the initial input v to obtain a partially transformed version ṽ by means of a stochastic mapping q_D(ṽ|v), which is then used to train the AE model in Eq. (1). The distribution function of q_D can use additive Gaussian noise, random zeroing noise, or salt-and-pepper noise [2]. Empirical results showed that SDAEs can perform better than non-denoised ones with a suitable corruption level, which gives us the heuristic idea that precision reduction can achieve a similar effect.

The AE classification

The AE model can capture the features of the input data in an unsupervised way. For typical object classification applications, a classifier layer is often added on top of the AE, forming the whole application structure. Fig. 2 shows the execution flow of a typical AE classification. The whole process is divided into three stages: AE pretraining, classifier training and prediction. When pretraining, the AE layer calculates the reconstruction data by the encode function f and decode function g. The errors (X − X_r) are then used to calculate the residues of the encode layer and decode layer by the functions p and q. The gradients of the parameters (θ1 and θ2, including the weights of the model and some biases) are calculated by the functions u and v. The optimisation method of non-linear conjugate gradient is used to search for the optimal values of the model parameters. The training data are divided into batches for processing and the model parameters are updated in batches for Maxepoch times.

The classifier layer first encodes the training data by the f function with the pretrained parameters. It often uses a logistic regression model [12] (such as softmax) to generate actual labels, comparing them with the prepared labels to generate the gradients of its model parameters (θs). This process is similar to the pretraining. After the whole model is trained, the updated model parameters are used to perform the prediction. This process is relatively simple compared to the training process. In Fig. 2, the training procedure is the most time-consuming core. Moreover, this core may be processed many times while searching for a satisfactory gradient, and thus occupies most of the training time.

3 Fixed-point processing of AE

Conversion of data format

The software version of the AE classification algorithm uses double-precision floating-point representations, which need 64 bits per value. When the algorithm is implemented in hardware, a fixed-point data format is used to save area. The fixed-point representation expresses a value with a bit-width of n, which includes one sign bit, an integer part with a bit-width of m − 1 and a fractional part with a bit-width of n − m. Its domain is often much smaller than the floating-point one, as Fig. 3 shows.
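A minimal sketch of the domain truncation and rounding described in this section is given below, assuming n total bits, one sign bit, m − 1 integer bits and n − m fractional bits. The function name, the saturation bounds and the treatment of values below the resolution are illustrative simplifications rather than the exact hardware behaviour.

```python
import numpy as np

def to_fixed_point(x, n=20, m=6):
    """Simulate an n-bit signed fixed-point format with (m - 1) integer bits
    and (n - m) fractional bits, as used to emulate the hardware data path."""
    frac_bits = n - m
    scale = 2.0 ** frac_bits
    max_pos = (2.0 ** (m - 1)) - 1.0 / scale   # largest representable positive value
    min_neg = -(2.0 ** (m - 1))                # most negative representable value
    x = np.clip(x, min_neg, max_pos)           # domain truncation (saturation)
    return np.round(x * scale) / scale         # quantise to the fixed-point grid

weights = np.random.randn(4, 4)
print(to_fixed_point(weights, n=20, m=6))      # e.g. sign + 5 integer bits + 14 fractional bits
```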
The method of data format conversion corresponding to the hardware implementation is domain truncation. First, positive data larger than MaxPD or smaller than MinPD are set to MaxPD or MinPD, respectively; negative data larger than MaxND or smaller than MinND are set to MaxND or MinND, respectively. This constrains the domain of a floating-point value to the domain that an n-bit fixed-point value can represent. Second, each value is scaled by a factor of 2^(n−m), rounded to the nearest integer, and then divided by 2^(n−m), thereby constraining the value to the fixed-point representation grid. This method introduces additional comparison, multiplication, and division operations for each data element in the algorithm, which increases the simulation time.

Matrix blocking for fixed-point operations

For complex matrix operations such as the matrix multiplications in the AE, parallel multiply-accumulators are often used, as shown in Fig. 4. The operands are stored in distributed block RAM with a bit-width of n bits. A 2n-bit partial product is produced by the n-bit multiplier. An accumulator with a larger bit-width can be used to accumulate the partial products, avoiding precision loss without adding much logic cost. We therefore usually choose a bit-width between n and 2n bits for the adder and the accumulator; only the final result, which has to be stored back to on-chip RAM, is constrained to n bits. The partition of the result into integer and fractional parts depends on the representation range of the data and can be implemented with shifters.

Under the implementation assumption above, it is more reasonable to maintain full precision inside a block matrix multiplication and convert only the block result, rather than converting each partial product element by element. Assuming a sufficiently wide bit-width for the accumulation, we only need to cut the bit-width down to n bits for the result of a block multiplication when simulating the fixed-point operations. Based on this observation, we converted all matrix operations in the AE to loops over block matrix operations and converted each element of the block result to the fixed-point representation described in Section 3.1.

The sigmoid function approximation

The sigmoid function is used extensively in the AE model. In software, it is computed with the exponential function and a division. As it is very expensive to implement the exponential function and division directly in large-scale parallel hardware, approximations suitable for hardware implementation must be considered, and the impact of the sigmoid approximation should be evaluated.

Piecewise linear approximations of nonlinearities (PLAs) [13,14,15] are among the most typical approximation methods; they require only a small amount of logic per unit while retaining adequate precision, and are therefore well suited to vastly replicated units. Two PLAs were evaluated in [11]; we chose the less precise one, shown in Tab. 1, for our experiments. A PLA module built in hardware uses only a comparator, a shifter, and an adder, which are very simple.
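As an illustration of Sections 3.1 and 3.2, the sketch below emulates the two-step domain truncation and a block matrix multiplication in NumPy: inside each block the partial products are accumulated at full (double) precision, the block result is cut back to the fixed-point grid, and the summed result finally written back is constrained in the same way. The parameter names (n, int_bits), the default 20-bit/5-bit split, and the exact point of re-quantization are interpretations of the text, not a verified reproduction of the hardware design.

```python
import numpy as np

def to_fixed(x, n=20, int_bits=5):
    """Clamp to the representable domain, then round to an n-bit fixed-point grid.

    Assumes 1 sign bit, `int_bits` integer bits, and n - int_bits - 1 fractional
    bits; max_pd/min_nd play the role of MaxPD/MinND in Section 3.1.
    """
    frac_bits = n - int_bits - 1
    scale = 2.0 ** frac_bits
    max_pd = 2.0 ** int_bits - 1.0 / scale        # largest representable positive value
    min_nd = -2.0 ** int_bits                     # most negative representable value
    x = np.clip(x, min_nd, max_pd)                # step 1: domain truncation
    return np.round(x * scale) / scale            # step 2: snap to the fixed-point grid

def blocked_matmul_fixed(A, B, block=64, n=20, int_bits=5):
    """Block matrix product in which only block results are re-quantized."""
    A = to_fixed(A, n, int_bits)
    B = to_fixed(B, n, int_bits)
    out = np.zeros((A.shape[0], B.shape[1]))
    for k in range(0, A.shape[1], block):
        partial = A[:, k:k + block] @ B[k:k + block, :]   # wide accumulation inside a block
        out += to_fixed(partial, n, int_bits)             # block result cut back to n bits
    return to_fixed(out, n, int_bits)                     # stored result kept on the grid

# Toy comparison against the double-precision product.
rng = np.random.default_rng(1)
A, B = rng.standard_normal((8, 256)), rng.standard_normal((256, 4))
print(np.max(np.abs(blocked_matmul_fixed(A, B) - A @ B)))
```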
Fig. 5 shows the software version of the sigmoid function (solid line), the PLA version (segmented line), and the absolute error of the PLA with respect to the software version (the curve; the error values are magnified ten times on the output y scale for clarity). The maximum absolute error is 6.79 %. The effect of this approximation needs to be evaluated.

4 Experimental results and analysis

In our experiments, MNIST classification was selected as the target application because of its popularity in machine learning studies. The dataset consists of 5,000 training samples and 1,000 testing samples of 28×28-pixel digit images. The model size is 784-400-10 and the batch size is 100. The Maxepoch of pretraining is 10 and the Maxepoch of classifier training is 200. Using the methods described in Section 3, we rewrote all the training processes of Fig. 2; the bit-width and the blocking number are fully parameterized. All experiments were done in Matlab 2010a.

In a fixed-point representation of real numbers, the integer part mainly determines the representation range while the fractional part mainly determines the precision. We therefore experimented with various combinations of integer and fractional parts and various conversion methods to evaluate the influence of precision changes. All programs were run on a PC with an Intel® Quad CPU Q8200 at 2.34 GHz and 2 GB of memory. The AE classification rate using double-precision floating-point representations is about 90.1 %.

We first evaluated the domain truncation method of Section 3.1. Fig. 6 shows the AE classification performance for various integer-part bit-widths with a sufficiently wide fractional part. Once the search reaches 4 bits, the performance becomes acceptable and approaches that of the software version (90.1 %). We selected a 5-bit integer part and then determined the fractional part. Fig. 7 shows the resulting performance: when the total bit-width reaches 20 bits (i.e., a fractional part of 20 − 5 − 1 = 14 bits), the performance is 88.8 %, which is acceptable. We then kept the same bit-widths as in Fig. 7 and added the matrix blocking method to the simulation. Figure 8 shows the performance for block numbers of 32, 64, and 128, compared with the performance without matrix blocking. The performance at the corresponding bit-widths becomes worse, especially at the critical bit-width of 20 bits, and there is no obvious trend with the block number when wider bit-widths are used. The overall results of Figure 8 indicate that the hardware computation parallelism may affect AE performance, but not in a monotone way.

We then added the PLA method to the simulation. Fig. 9 shows the results.
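Tab. 1 itself is not reproduced in this copy, so the block below sketches one widely used member of the PLA family (the PLAN scheme, whose slopes and intercepts are powers of two, so that a hardware unit needs only comparators, shifters, and adders) and shows how the maximum absolute error can be measured. It illustrates the approach only; it is not the exact, less precise scheme chosen by the authors, whose reported maximum error is 6.79 %.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pla_sigmoid(x):
    """PLAN-style piecewise linear sigmoid; slopes/intercepts are powers of two."""
    a = np.abs(x)
    y = np.where(a >= 5.0, 1.0,
        np.where(a >= 2.375, 0.03125 * a + 0.84375,
        np.where(a >= 1.0,   0.125   * a + 0.625,
                              0.25    * a + 0.5)))
    return np.where(x >= 0, y, 1.0 - y)

# Maximum absolute error of this particular approximation over a dense grid.
x = np.linspace(-8, 8, 100001)
print(np.max(np.abs(pla_sigmoid(x) - sigmoid(x))))   # ~0.019 for the PLAN scheme
```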
The performances are very similar to the corresponding ones in Fig. 8 when bit-widths of more than 20 bits are used. This means that the precision loss of the PLA did not exceed the precision loss of the bit-width reduction and therefore did not affect the overall performance. At the critical point of 20 bits, no monotone trend was observed because of the interaction between the PLA approximation and the blocking number.

Our work gives a comprehensive evaluation of the performance variation observed when converting the floating-point AE algorithm to a fixed-point one for implementation on FPGAs with large-scale fixed-point units. The data format conversion method, the matrix blocking, and the sigmoid function approximation are the main factors that must be considered when implementing the AE on large computation arrays. The simulation method for the data conversion, matrix blocking with different degrees of parallelism, and a simple PLA approximation were evaluated in this paper. The results showed that the fixed-point bit-width does affect the performance of the AE, and that multiple factors can interact. Each factor has a two-sided impact, discarding "redundant" and "useful" information at the same time. The representation domain of the data must be constrained carefully, and the bit-width must be selected according to the computation parallelism. The results also showed that fixed-point arithmetic can preserve the precision of the AE algorithm with acceptable convergence speed.

Figure and table captions: Figure 1 – The AutoEncoder model; Figure 2 – The execution flow of the AE classification; Figure 3 – Representation domain constraint; Figure 5 – Sigmoid function and absolute error of PLAs; Figure 6 – Performance of domain-truncated AE classification, searching for the bit-width of the integer part; Figure 8 – Performance of fixed-point AE classification using matrix blocking; Table 1 – Piecewise linear approximation algorithm.
3,775.8
2016-02-19T00:00:00.000
[ "Computer Science" ]
Effects of drug-resistant mutations on the dynamic properties of HIV-1 protease and inhibition by Amprenavir and Darunavir Molecular dynamics simulations are performed to investigate the dynamic properties of wild-type HIV-1 protease and its two multi-drug-resistant variants (Flap + (L10I/G48V/I54V/V82A) and Act (V82T/I84V)) as well as their binding with APV and DRV inhibitors. The hydrophobic interactions between flap and 80 s (80’s) loop residues (mainly I50-I84’ and I50’-I84) play an important role in maintaining the closed conformation of HIV-1 protease. The double mutation in Act variant weakens the hydrophobic interactions, leading to the transition from closed to semi-open conformation of apo Act. APV or DRV binds with HIV-1 protease via both hydrophobic and hydrogen bonding interactions. The hydrophobic interactions from the inhibitor is aimed to the residues of I50 (I50’), I84 (I84’), and V82 (V82’) which create hydrophobic core clusters to further stabilize the closed conformation of flaps, and the hydrogen bonding interactions are mainly focused with the active site of HIV-1 protease. The combined change in the two kinds of protease-inhibitor interactions is correlated with the observed resistance mutations. The present study sheds light on the microscopic mechanism underlying the mutation effects on the dynamics of HIV-1 protease and the inhibition by APV and DRV, providing useful information to the design of more potent and effective HIV-1 protease inhibitors. While human immunodeficiency virus (HIV) enters target cell, its RNA is transcribed into DNA through reverse transcriptase which then integrates into target cell's DNA and rapidly amplifies along with the replication of target cell. The HIV-1 protease (HIV-1 PR) is essential to the replication and invasion of HIV as protease is responsible for cleaving large polyprotein precursors gag and releasing small structural proteins to help the assembly of infectious viral particles [1][2][3] . HIV-1 PR is a symmetrically assembled homo-dimer, consisting of six structural segments (Fig. 1a): flap (residues 43-58/43'-58'), flap elbow (residues 35-42/35'-42'), fulcrum (residues 11-22/11'-22'), cantilever (residues 59-75/59'-75'), interface (residues 1-5/1'-5' , 95-99/95'-99'), and active site (residues 23-30/23'-30') 4,5 . So far two distinct conformations have been experimentally observed, mainly on the flap regions (two β -hairpins covering the large substrate-binding cavity): the flaps take a downward conformation towards the active site (closed state) when a substrate is bound, which, however, shift to a semi-open state when there is no bound substrate. The orientation of two β -hairpin flaps in the two states is reversed 6,7 . Although no fully open state has been measured by X-ray crystallography experiment yet 3,[8][9][10] , which is probably attributed to its short transient lifetime, reasonable speculation has been proposed that flaps could fully open to provide access for the substrate and then the residues of Asp25 and protonated Asp25' in the active site of the protease aid a lytic water to hydrolyze the peptide bond of substrate, producing smaller infectious protein 11,12 . 
Subnanosecond timescale NMR experiment by Torchia and coworkers [13][14][15] suggested that for substrate-free (apo) HIV-1 PR, the semi-open conformation accounts for a major fraction of the equilibrium conformational ensemble in aqueous solution, and a structural fluctuation is measurable on flap tips which is in a slow equilibrium (∼ 100 μ s) from semi-open to fully open form. However, due to high flexibility of HIV-1 PR in aqueous solution, it is still difficult for NMR to provide detailed structural data for fully open conformation. Molecular dynamics (MD) simulation, as an attractive alternative approach, has been extensively utilized to explore atomic-level dynamic information of flap motion. Scott and Schiffer 16 reported irreversible flap opening transition in a MD simulation starting from the semi-open conformation of apo HIV-1 PR, which pointed out that the curling of flap tips buries the initially solvent accessible hydrophobic cluster and stabilizes the open conformation of HIV-1 PR. Similar but reversible flap opening event was also discovered by Tozzini and McCammon using coarse-grained model for 10 μ s simulation 17 . In addition, the MD simulation by Hornak et al. 7 6,18 . It is worth noting that the open conformations found in the abovementioned MD simulations are somehow not exactly the same: although the active site is fully exposed, the flap tips are completely upward-oriented in the discovery of Hornak et al. 7 but have downward curling conformation in the discovery of Scott and Schiffer 16 . So far ten protease inhibitors have been approved by Food and Drug Administration (FDA), including squinavir (SQV), ritonavir (RTV), lopnavir (LPV), atazanavir (ATV), nelfinavir (NFV), indinavir (IDV), tipranavir (TPV), amprenavir (APV), fosamprenavir (FPV, prodrug of amprenavir), and darunavir (DRV or named as TMC114) [19][20][21][22] . These inhibitors bind to the active site of HIV-1 PR, block gag-pol, and thereby prevent the formation of mature virus particles in vitro. Nevertheless, because of the error-prone property of HIV-1 reverse transcriptase (HIV-1 RT), mutations in HIV-1 PR arise during the course of treatment, inducing drug-resistance to abovementioned inhibitors 4 . The flexibility of flap regions which is vulnerable to mutation controls the equilibrium among the three functional conformations: mutations in HIV-1 PR can make the flaps more flexible and destabilize closed conformation, which might increase the opening rate of flaps and thus release the inhibitor. Kinetic experiments show that the L90M, G48V, and L90M/G48V variants could reduce the binding affinity of inhibitors, which is caused by an increase in dissociation rates 4 . Perryman et al. reported that the protease variant with mutation sites in 80 s loops (V82F/I84V) shows more frequent and rapid flap curling than wild-type (WT) HIV-1 PR does 4,23 . Similarly, the I50V mutation in flap regions selected by APV 1 shows more flexible flaps 24 , and single mutation distant from flap regions such as L63P or L10I can increase the flexibility of flap regions as well 25 . Hence, the dynamics of flaps changed by local or distal mutation is likely involved in increasing dissociation rates and thus reducing the efficiency of drugs 4 . 
More severely, the accumulation of single mutations causes serious cross drug resistance which reduces the potency of two most effective drugs, APV and the chemically similar inhibitor DRV (the single-ringed tetrahydrofuran (THF) group of APV is replaced by double-ringed bis-THF in DRV) 26,27 . Therefore, not only single mutation effect but also the cooperative effect of multi-mutations on HIV-1 PR should be studied to aid the design of novel drugs with better potency. The isothermal titration calorimetry (ITC) experiment by King et al. indicated that the small structure difference of APV and DRV inhibitors lead to apparently different binding affinities towards WT HIV-1 PR and its two multi-drug-resistant (MDR) variants, namely Flap+ (L10I/G48V/I54V/V82A) and Act (V82T/I84V) 27 . Flap+ has a combination of mutations mainly in the flap region whereas the mutations in Act are solely focused in 80 s (80's) loop (Fig. 1). Accordingly, in comparison to WT protease, the binding affinity of the inhibitors to Flap+ and particularly Act variants is reduced. To understand the molecular mechanism underlying different binding affinity of the two chemically similar inhibitors to HIV-1 PR and the detailed mutational effects on the dynamics of protease as well as protease-inhibitor interactions, the present study ran long-time MD simulations on apo, APV-bound, and DRV-bound WT HIV-1 PR and its Flap+ and Act variants. In addition, the absolute binding free energies of APV and DRV to the three HIV-1 PRs were calculated using free energy perturbation (FEP) method, which are very consistent with the experimental data 27 . The all-atom MD simulations observe a conformational transition from initial closed structure to a semi-open structure only for Act variant but not for WT protease and Flap+ variant when there is no inhibitor bound. The detailed analysis reveals the origin of the amplified structural flexibility of the Act variant: the hydrophobic interactions between flaps and 80 s (80's) loop residues (mainly I50-I84' and I50'-I84) along with the hydrophobic interactions between the two flaps play important roles in maintaining the closed conformation of HIV-1 PR; the double mutation of the 80 s (80's) loop residues (V82T/I84V) weakens the hydrophobic interactions between flap and 80 s (80's) loop regions and thus destabilize the closed conformation. APV or DRV inhibitor binds with HIV-1 PR via hydrophobic interaction as well as hydrogen bonding: the two phenyls and one isobutyl groups of the inhibitor collaborate with the hydrophobic residues of I50 (I50'), I84 (I84') and V82 (V82') to create hydrophobic core clusters to enhance the stability of the closed conformation of the flaps; meanwhile the inhibitor forms multiple hydrogen bonds with the active site of HIV-1 PR. The combined change in the two kinds of protease-inhibitor interactions is correlated with the observed resistance mutations. Results Comparison of the binding thermodynamics measured by FEP calculation and experiment. In principle, the accuracy of theoretical simulation largely depends on the molecular force field used. In the present study, AMBER FF03 force field was used to model protein and the general amber force field (GAFF) was assigned to model APV and DRV inhibitors, respectively. To indicate the efficiency of the present study in measuring the molecular interactions of HIV-1 PR and the two inhibitors, the protease-inhibitor binding free energies were calculated for all complex systems under study by FEP method 28 . 
One can see from Table 1 that the calculated value of the binding free energy is in well agreement with the experimental data for each complex system. In addition, for each inhibitor, its binding free energy with WT HIV-1 PR is more negative than those with Flap+ and Act. Meanwhile, the binding free energy of DRV is generally more negative than that of APV with respect to individual proteases, suggesting that the force fields used here can essentially reflect the change in protease-inhibitor binding interactions induced by the residue mutation of HIV-1 PR as well as the structure change of inhibitor. Higher flap flexibility in Act variant than in wild-type HIV-1PR and Flap+variant. To see the effect of amino acid mutation on the structure stability of apo HIV-1 PR, root-mean-square fluctuations (RMSFs) of individual residues for the three apo proteases were calculated. As shown in Fig. 2a, the RMSFs in Flap+ variant have similar shape but slightly smaller values than those in WT HIV-1 PR. Intriguingly, the RMSFs in Act variant have apparently larger values, particularly for residues in flap (50 s/50's), 80 s loop and active site (25-30/25'-30'), suggesting larger flexibility in these regions ( Fig. 2a-d). To inspect the motion difference on flaps among the three apo systems, we calculated the inter-residue distances of I50-I50' , D25-I50, D25'-I50' that have been often used to reflect the horizontal and vertical motion of flaps 4,24,29 . Figure 3 shows that there is a remarkable difference in the motion of flaps in Act as both WT and Flap+ proteases show similar motion inferred from the mild fluctuation of the three abovementioned distances. One can find that at ~220 ns, the three inter-residue distances in Act variant reach their climaxes simultaneously, corresponding to the large motion of flaps at this time. Snapshot conformation was extracted to visualize the structure at 220 ns and a semi-open conformation was achieved. As shown in Fig To further assess the amplified dynamic properties of Act variant with respect to the WT and Flap+ , the distance distributions involving the residues in flap tips were calculated for the three apo proteases (Fig. S2 in the supplementary material online). The distance between the tip of the flaps (50-50') is centered at similar position for WT protease and Flap+ but is shifted to larger value for Act. The flap tip in one monomer is farther away from the 80 s loop in the other monomer (80-50') in Act in comparison to the WT and Flap+ . In addition, the intra-monomeric C α distance of 50-80 is shortest in Flap+ , in the middle in WT, and longest in Act. The inter-monomeric distance between the flap tip and active site (25-50') remains unchanged among the three protease systems but the intra-monomeric distance of 25-50 is elongated in Flap+ and Act mutants. The changes in these distances further indicate the more open conformation of the flaps in Act than those in the other two proteases. In contrast to rather rigid active site region, the inter-monomeric distance of 80 s loops (80-80') which form the inner walls of the active site are shortened in Flap+ but is elongated in Act compared to that in WT protease. The features of all Important roles of hydrophobic interactions between Ile50 (Ile50') in flap and Ile84' (Ile84) in 80's (80 s) loop in adjusting the dynamics of flap. To explore the inherent reason why the flap flexibility is amplified in Act variant, general correlation analysis, developed by Oliver F. 
Lange 30 , was performed for the three apo protease systems under study to reveal the correlation between flaps and the remaining parts of HIV-1 PR (Fig. 5). The strongest correlation can be seen between the flap tips in chain A (residues around I50) and chain B (residues around I50'), which can be explained by the direct inter-residue interactions between the two flaps, e.g., the hydrophobic interactions of I47-I50' , I50-I54' pairs and the counterparts of I47'-I50 and I50'-I54. In addition, the flap tip in chain A (B) also shows strong correlation with 80's (80 s) loop in WT HIV-1 PR. This correlation is, however, attenuated in Flap+ and particularly in Act variant. Hydrophobic interactions between I50 in flap A and I84' in 80's loop as well as I50' in flap B and I84 in 80 s loop are the key interactions between flap and 80 s (80's) loop regions which can be clearly seen in the crystal closed structure of WT HIV-1 PR (Fig. 6). Such hydrophobic interactions along with the hydrophobic interactions between the two flaps might contribute for fastening the flaps. As the hydrophobic side-chain of I84 is shortened in Act (I84 → V84), its contribution should be certainly decreased. In order to analyze the hydrophobic interactions of I50-I84' and I50'-I84, the side-chain distances of the two pairs are plotted in Fig. 6a, b for WT HIV-1 PR and the Flap+ and Act variants. One can see that the side-chain distance of I50-I84' or I50'-I84 mainly stays in low region (~6.0 Å), indicating a strong hydrophobic interaction between these residues in WT HIV-1 PR. In Flap+ , the distance of I50'-I84 still keeps steady and small but the distance of I50-I84' is slightly increased, suggesting weaker hydrophobic interaction between flap and 80 s loop. In contrast, the replacement of I84 by valine leads to the large fluctuation of the side-chain distance of I50-V84' in Act and a high peak can be seen clearly at ~220 ns, corresponding to the partial opening of the flaps (Fig. 3 and 4). In addition, the fluctuation of I50'-V84 side-chain distance in Act is also larger than those in the other two proteases (Fig. 6b). Peaks can be also seen at the position of 70 ~ 100 ns, corresponding to a slight upward motion of the flap in chain B (see Supplementary Fig. S3 online). These observations are consistent with our speculation that the hydrophobic interaction between I50 (I50') in flap and I84' (I84) in 80's (80 s) loop plays an important role in locking the flaps in closed conformation and the mutation of I84 by less hydrophobic residue might loosen the flaps. Another hydrophobic residue in 80 s (80's) loop, V82 (V82'), could also form hydrophobic interaction with the flap residue I50' (I50) but its strength is weaker than that of I50-I84' (I50'-I84), as revealed by the longer distance in Fig. 6c, d. The replacement of V82 by alanine or threonine further elongates the inter-residue distance and thus further weakens the hydrophobic interactions. Therefore the interactions from V82 to I50' and from V82' to I50 are at least not as important as the interactions from I84 to I50' and from I84' to I50 in stabilizing the closed conformation of HIV-1 PR. Molecular interactions between HIV-1 PR and DRV/APV inhibitors. To see the influence of inhibitor binding on the flap dynamics of HIV-1 PR, the distances between D25 and I50, D25'and I50' , I50 and I50' for APV or DRV bound WT, Flap+ , and Act proteases are plotted in Fig. S4 in the supplementary material online. 
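For readers who wish to reproduce this kind of analysis, the sketch below computes inter-residue Cα distance time series (e.g., I50–I50′ and D25–I50) from a trajectory with MDAnalysis. The file names and the segid-based selections are hypothetical and depend on how the topology was prepared (a PDB topology is assumed so that chain IDs map to segids), and periodic-boundary effects are ignored for these intramolecular distances.

```python
import numpy as np
import MDAnalysis as mda

# Hypothetical file names for the topology and the 500 ns trajectory.
u = mda.Universe("protease.pdb", "protease_500ns.xtc")

# C-alpha atoms of the flap tips (I50/I50') and of catalytic D25.
ca_I50  = u.select_atoms("segid A and resid 50 and name CA")
ca_I50p = u.select_atoms("segid B and resid 50 and name CA")
ca_D25  = u.select_atoms("segid A and resid 25 and name CA")

time_ps, d_50_50p, d_25_50 = [], [], []
for ts in u.trajectory:
    time_ps.append(ts.time)
    d_50_50p.append(np.linalg.norm(ca_I50.positions[0] - ca_I50p.positions[0]))
    d_25_50.append(np.linalg.norm(ca_D25.positions[0] - ca_I50.positions[0]))

print("mean I50-I50' Ca distance: %.2f A" % np.mean(d_50_50p))
print("mean D25-I50  Ca distance: %.2f A" % np.mean(d_25_50))
```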
The decrease in the fluctuation of these distances can be seen in the presence of these inhibitors, particularly for Act variant. In addition, the decrease in distance fluctuation is generally more apparent for the binding of DRV than the binding of APV, suggesting stronger inhibiting effects of the former inhibitor than the latter. The change in RMSF of the C α atoms of individual residues in one monomer of HIV-1 PR induced by the inhibitor binding was also measured. One can see from Fig. 7 that the regions involved in the inhibitor contacting (flaps, active site, and 80 s loop) are less flexible (with negative Δ RMSF) as APV or DRV is bound. This restriction is more pronounced for Act variant. Interestingly, the other regions which are not subjected to the contacts from the inhibitor become more flexible in the inhibitor-bound state. These two effects compensate with each other and as a result the average flexibility of the protease might be not highly influenced by the inhibitor binding, consistent with the observation in previous molecular simulation by Cai et al. 31 . As discussed earlier, the hydrophobic interactions of I50-I84' and I50'-I84 play an important role in holding the flaps in their closed conformations. As DRV or APV enters the binding pocket of HIV-1 PR, the positioning of the inhibitor allows its two phenyl groups and one isobutyl group to participate into the hydrophobic interactions with the hydrophobic pairs of I50-I84' and I50'-I84, which creates stable hydrophobic core clusters and thus could enhance the stabilization of the closed conformation of HIV-1 PR (Fig. 8). The time series of the total number of hydrophobic contacts (HC) between APV or DRV and the protease was plotted in Fig. 8 for all complex systems. In addition, the total number of hydrophobic contacts was averaged throughout the simulation trajectory for each complex system and the error was estimated with 10 ns/block averaging. One can see that the numbers are more or less similar for WT HIV-1 PR and Flap+ variant no matter which inhibitor is bound (the averaged values are 10.93 ± 0.34 for WT-APV, 10.71 ± 0.26 for Flap+ -APV, 9.38 ± 0.34 for WT-DRV and 10.86 ± 0.49 for Flap+ -DRV, respectively). The error bars are quite small compared to the detailed HC numbers, indicating the simulation convergence. In contrast, the hydrophobic contacts in Act are apparently less (the averaged values are 9.07 ± 0.23 for Act-APV and 7.91 ± 0.38 for Act-DRV), which can be attributed to the two mutations of V82 and I84 by less hydrophobic residues in the protease. The van der Waals (vdW) contacts between specific APV or DRV moieties (Fig. 9a) and HIV-1 PR were analyzed in details (Fig. 9b). The calculated vdW interaction energies between various moieties of DRV and WT protease (Fig. 9b right) are more or less close to the counterparts as shown in Fig. 2B of Ref. 41. The mutation of WT protease to Flap+ or Act leads to the loss of vdW interaction energies between the protease and DRV moieties. The detailed values of the loss of vdW interaction energies for Table 1 of Ref. 32). Therefore, while Ref. 32 indicated that the impact of Act mutation on DRV contacts is mainly focused on the P1' and P2′ moieties, the present study suggests that the impair is mainly on P1′ and P2. The impact of Flap+ mutation on DRV contacts is, however, larger at the central P1 and P1' than the marginal P2 and P2' moieties ( Fig. 9c right). 
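The "10 ns/block" error estimate mentioned above corresponds to a standard block-averaging procedure; a minimal sketch is given below, with the frame spacing and block length as assumed example values.

```python
import numpy as np

def block_average(series, block_len):
    """Mean and standard error estimated from non-overlapping blocks.

    `block_len` is the number of frames per block (e.g. the frames spanning
    10 ns); trailing frames that do not fill a complete block are dropped.
    """
    series = np.asarray(series, dtype=float)
    n_blocks = len(series) // block_len
    blocks = series[:n_blocks * block_len].reshape(n_blocks, block_len)
    block_means = blocks.mean(axis=1)
    return block_means.mean(), block_means.std(ddof=1) / np.sqrt(n_blocks)

# Example: a synthetic hydrophobic-contact count recorded every 0.1 ns for 500 ns.
rng = np.random.default_rng(2)
hc = 10 + rng.normal(0, 1.5, size=5000)
print(block_average(hc, block_len=100))   # 100 frames * 0.1 ns = 10 ns blocks
```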
For APV inhibitor, the residue mutation induces more complex impact on protease-inhibitor contacts: the mutation of WT to Flap+ impairs vdW interaction of P1' and P2 moieties but meanwhile promotes vdW interaction of the remaining moieties of APV; the mutation of WT to Act impairs vdW interaction of the most moieties (P2, P1' , and P2′) but meanwhile promotes vdW interaction of the P1 moiety of APV. The vdW interaction energies between the whole molecule of APV or DRV inhibitor and individual representative active site residues were also calculated and depicted in Fig. 9d, along with the corresponding changes in vdW interaction energy in mutant structures (Flap+ and Act) relative to the WT complex (Fig. 9e). One can see that DRV induces asymmetry in protease-inhibitor contacts (Fig. 9e right). As a result, although the residue mutating is identical in both monomers, the effects of the mutations on protease-inhibitor contacts are not uniform in the two monomers. As a matter of fact, it is mainly the residues of 25, 30, 47, and 50 in one monomer that lose vdW interactions in DRV-bound flap+ and Act variants. In contrast, the impairment of vdW interactions towards APV spreads over the two monomers in flap+ and Act variants (Fig. 9e left). Hydrogen bonding interaction along with the hydrophobic interaction are the main contributors in protease-inhibitor binding. The number of protease-inhibitor hydrogen bonds (HB) was calculated for the complex systems under study (Fig. 10). One can see that the HB numbers formed by DRV with WT protease and the variants are more or less similar (the averaged values are 4.58 ± 0.30 for WT-DRV, 5.27 ± 0.42 for Flap+ -DRV, and 4.70 ± 0.25 for Act-DRV, respectively). In addition, the HB number between APV and WT protease (3.39 ± 0.41) is slightly smaller than that between DRV and WT protease. The HB number between APV and Act fluctuates a lot and the HB number between APV and Flap+ is sharply decreased in comparison to those between DRV and corresponding protease systems. Therefore, while the average HB number between APV and Act is 3.92 ± 0.82, the average number between APV and Flap+ is only 1.28 ± 0.32. Generally speaking, the electrostatic interactions from APV to HIV-1 PR are weaker than those from DRV. Considering the similar numbers of the protease-inhibitor hydrophobic contacts for APV and DRV (Fig. 8a,b), the less electrostatic interaction of APV to HIV-1 protease could be the main reason for the experimentally observed weaker binding of APV than DRV. As a critical hydrogen bond, hydrogen bond formed between D25 and/or D25' residue and the hydroxyl group of APV or DRV is given additional attention 22 . While D25' is protonated in the present simulation, the hydroxyl group of either inhibitor mainly forms hydrogen bond with the side-chain OD2 atom of D25 (see Supplementary Fig. S5 online). The distance between side-chain OD2 atom of D25 and the hydroxyl oxygen (O3) of the inhibitor was calculated (see Supplementary Fig. S6 online). In WT HIV-1 PR, the hydrogen bond distance between D25 and APV mainly stays at low value (~2.7 Å) with occasional jumping to relatively high value (~4.0 Å). The hydrogen bonding of D25 and DRV is quite stable since its distance always keeps at low value. While the WT HIV-1 PR is mutated to Act or Flap+ , the hydrogen bonding becomes less stable, as revealed by larger fluctuation in the hydrogen bonding distance between D25 and APV or DRV. 
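Hydrogen-bond statistics such as those above are usually obtained from a geometric criterion evaluated frame by frame; the sketch below shows one such donor–H···acceptor criterion and an occupancy defined as the percentage of analysed frames in which the interaction is present. The 3.5 Å distance and 130° angle cutoffs are common defaults and are assumptions here, not necessarily the thresholds used by the authors.

```python
import numpy as np

def hbond_present(donor, hydrogen, acceptor, d_max=3.5, angle_min=130.0):
    """Geometric criterion: donor-acceptor distance and donor-H-acceptor angle."""
    if np.linalg.norm(acceptor - donor) > d_max:
        return False
    v1 = donor - hydrogen
    v2 = acceptor - hydrogen
    cos_ang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0))) > angle_min

def occupancy(per_frame_flags):
    """Occupancy (%) = fraction of analysed frames in which the interaction exists."""
    return 100.0 * np.mean(np.asarray(per_frame_flags, dtype=float))

# Sketch of use: for the bridging water discussed later, a frame counts only if
# all four of its hydrogen bonds (to the I50 and I50' backbone NH groups and to
# the O2/O5 atoms of the inhibitor) satisfy the criterion simultaneously.
frames = [[True, True, True, True], [True, False, True, True]]
print(occupancy([all(f) for f in frames]))   # -> 50.0
```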
Meanwhile, the THF of inhibitors can also form hydrogen bonds with surrounding protease residues (see Supplementary Fig. S7 online). For instance, the O6 atom of THF in APV could form two hydrogen bonds with the backbone amide hydrogens of D29 and D30 although the occupancies are low (13.63 ± 8.80% and 16.28 ± 9.82% for D29-APV and D30-APV in WT HIV-1 PR, 50.48 ± 13.30% and 61.18 ± 14.94% in Act but only 3.72 ± 1.30% and 0.05 ± 0.02% in Flap+ ). The O6 of bis-THF in DRV has enhanced hydrogen bonding ability with protease (59.37 ± 10.67% and 83.70 ± 8.29% for D29-DRV and D30-DRV in WT HIV-1 PR, 17.28 ± 5.22% and 16.47 ± 6.50% in Act, and 51.67 ± 8.07% and 77.18 ± 8.00% in Flap + ). In addition, the O7 in bis-THF of DRV can also form additional hydrogen bond with residue D29 (occupancy is 45.07 ± 12.72% in WT, 89.03 ± 3.76% in Act, and 66.47 ± 4.37% in Flap + HIV-1 PR). Therefore, more hydrogen bonds from DRV than APV can be formed with protease. It has been reported that water may mediate the binding interaction of inhibitor and HIV-1 PR 20,34,33 . A highly conserved "bridge" water molecule 24 connecting the flap region and inhibitor can be found in the crystal structure of HIV-1 PR: the oxygen atom of this bridging water forms two hydrogen bonds with the backbone amide hydrogens of I50 and I50' and meanwhile the hydrogen atoms of the water form two hydrogen bonds with the O2 and O5 atoms of APV or DRV (see Supplementary Fig. S8 online). The occupancy of the bridging water is defined as the percentage of protein structures containing a water molecule connecting the flap region and inhibitor as described above once the thermal equilibrium has been reached (e.g., after 10 ns of the simulation), based on the entire simulation data. The occupancy can have values from 0 to 100% and such definition sketches a reasonable space scope for bridging water. In addition, since the bridging water can form four hydrogen bonds simultaneously with the flap region of protease as well as drug backbone which undoubtedly restricts the dynamics of the protease, the occupancy of bridging water could to some extent reflect the efficacy of drug. Table 2 indicates the average value of the bridging water occupancy as well as the average distance of hydrogen bonds with the bridging water involved. The significantly high values of the bridging water occupancy in WT-DRV and Flap+ -DRV complexes imply that the two complex systems contain bridging water consistently. However, when DRV is bound to Act or APV is bound to any HIV-1 PR system, the conservation of the bridging water is largely decreased. To understand why the bridging water is more conserved in DRV-bound protease systems, we next calculated the RMSD values of the two inhibitors with respect to their initial configurations. Figure 11 shows that the RMSD value of DRV is relatively lower than that of APV in each protease system. In addition, the RMSD fluctuation of either APV or DRV is largest as it is bound with Act protease. These tendencies are consistent with the occupancy of the bridging water ( Table 2), suggesting that the stability of the hydrogen bond network of the bridging water is mainly dependent with the configuration stability of the inhibitor. Discussion The appearance of drug-resistant HIV-1 PR mutations becomes one of the main challenges for AIDS therapy. 
Understanding the molecular mechanism of drug resistance is critical to the design of new drugs, which requires comprehensive information of the binding interactions of inhibitors and their influence to the dynamics of WT HIV-1 PR and the drug-resistant variants. The binding of APV with HIV-1 PR is tighter than the first-generation inhibitors by ~1 order of magnitude 35 . The chemically similar inhibitor, DRV, which has a second THF ring, binds with the protease even more tightly than APV by 2 orders of magnitude. Through comparing the crystal structures of APV and DRV bound WT HIV-1 PR and its MDR variant (L63P, V82T, and I84V) and calculating their corresponding binding thermodynamics, King et al. observed that the binding of the two inhibitors to the MDR variant is impaired but the impaired binding is still more favorable than those of first-generation inhibitors 36 . It was found WT-APV Flap+-APV Act-APV WT-DRV Flap+-DRV Act-DRV Table 2. The occupancy of the bridging water and the average distances of hydrogen bonds with the bridging water involved for APV and DRV bound HIV-1 PR complex systems. that the I84V substitution in HIV-1 PR reduces van der Waals (vdW) interactions with the inhibitors, which might account for the reduced binding affinities of both APV and DRV. The potency of APV and especially DRV against MDR viruses was attributed by the authors to a combination of their high binding affinity and close fit with the binding pocket 36 . More recent ITC experiment of a series of inhibitors including APV and DRV binding to WT HIV-1 PR and two variants (Flap+ and Act) by the same research group indicated that the Flap+ variant exhibits extremely large enthalpy-entropy compensation for all inhibitors, suggesting that the drug-resistant mutations in Flap+ directly modulate the binding thermodynamics of inhibitors 27 . The molecular mechanics Poisson-Boltzmann (or Generalized Born) surface area (MM-PB/GBSA) and thermodynamic integration (TI) calculation of the binding free energy of DRV towards the WT HIV-1 PR and its Flap+ and Act variants indicated that the vdW interaction energy is dominant whereas the contribution of electrostatic interaction energy is minor in the total binding free energy of DRV to protease 37 . The crystal structure comparison of flap variants (I50V, I54V, and I54M) bound with SQV and variants (G48V, I54V, and I54M) bound with DRV suggested that the change in polar interactions between protease and inhibitors have the best correlation with observed resistance mutations 40 . In the present study, the dynamic properties of WT HIV-1 PR and its two variants Flap+ and Act as well as their interactions with APV and DRV inhibitors were investigated with all-atom MD simulations and FEP calculation. The binding free energies between the inhibitors and proteases achieved by FEP calculation are in well agreement with the experimental data, suggesting that the molecular force fields used here are suitable for the description of the protease-inhibitor interactions. Using the same force fields, the comparative MD simulations obtained results (e.g., the change in protease RMSF induced by inhibitor binding and mutational effects, the change in vdW interaction energy between inhibitor moieties and protease induced by mutational effects, and the distribution of residue-residue distance) comparable to multiple previous MD simulations 31,32,38,41 . Higher structural flexibility of Act variant is observed in comparison with WT protease and Flap+ variant. 
Specifically, starting from the closed conformation, a semi-open conformation is reachable for Act but not WT and Flap+ proteases when there is no inhibitor bound. The detailed analysis indicates that besides the hydrophobic interactions between the two flaps, the hydrophobic interactions between flaps and 80 s (80's) loop residues (mainly I50-I84' and I50'-I84) also play important roles in maintaining the closed conformation of HIV-1 PR. As the hydrophobic side-chain of I84 is shortened (I84 → V84) and V82 is substituted by hydrophilic residue (V82 → T82) in Act, the contribution of the hydrophobic interaction from 80 s loop is certainly decreased, leading to the amplified structural flexibility of flaps. The essential role of the hydrophobic core including I50 (I50'), I84 (I84'), and V82 (V82') in modulating the activity of HIV-1 protease was also emphasized by Mittal et al. in their recent site-directed cysteine cross-linking experiment 39 . The interactions between HIV-1 PR and APV or DRV inhibitor are also illuminated in the present study. Both hydrophobic and hydrogen bonding interactions contribute to the protease-inhibitor binding. For instance, the two phenyl groups and one isobutyl group of APV or DRV participate into the hydrophobic interactions with the hydrophobic pairs of I50-I84' and I50'-I84 of HIV1-PR, which creates stable hydrophobic core clusters among the flap, 80 s (80's) loop, and the inhibitor and thus could enhance the stabilization of the closed conformation of HIV-1 PR. The double mutation of the two hydrophobic residues by less hydrophobic ones in Act reduces the total number of hydrophobic contacts. As a result, although the flexibility of the HIV-1 PR structure is constrained by the binding of APV or DRV, the constraint in the structural flexibility of Act is the lowest. On the other hand, either APV or DRV can also form hydrogen bonds with the active site of HIV-1 PR, e.g., the hydrogen bindings of the hydroxyl group of the inhibitor to D25 and the oxygen in THF group of the inhibitor to D29 and/or D30 of HIV-1 PR. The additional THF ring of DRV allows it to form additional hydrogen bond with HIV-1 PR in comparison with APV. The inhibitors can also connect to the flap region via the hydrogen binding of a bridging water. The presence of fewer hydrogen bonds from APV to HIV-1 PR, which might be correlated with the less conformational stability of APV in the binding pocket, accounts for the experimentally observed weaker binding of APV than DRV to HIV-1 PR. As a matter of fact, a novel inhibitor with tris-THF, GRL-0519 that is under clinical trial, demonstrates better potency than DRV, for the more complicated hydrogen bond network between tris-THF and the active site 42 . In summary, the present study reveals the microscopic mechanism of mutational effects on the dynamic properties of HIV-1 PR and provides an atomic-level picture of the binding interactions between APV/DRV inhibitors and HIV-1 PR as well as the structure-affinity relationship. We anticipate that the present study could provide useful information to the future design of more potent and effective HIV-1 PR inhibitors. Methods Molecular dynamics simulation. The initial coordinates of APV-WT, APV-Flap+ , APV-Act, DRV-WT, DRV-Flap+ , and DRV-Act complexes were obtained from the protein data bank (PDB) and their PDB codes are 3EKV, 3EKP, 1T7J, 1T3R, 3EKT, and 1T7I, respectively 27,43 . 
The corresponding apo HIV-1 PR systems were obtained by removing the inhibitors from corresponding inhibitor-bound complex systems. All the crystal water molecules were retained. Considering the importance of the protonation of Asp25/Asp25' in the HIV-1 PR, a proton was added to the oxygen atom OD2 in Asp25' in chain B of HIV-1 PR and kept undetached in the simulation 24,44 . All simulations were carried out by the program GROMACS version 4.5.3 using the NPT ensemble and periodic boundary condition. The AMBER FF03 force field 45 was applied to model protein and the general amber force field (GAFF) 46 was assigned to model the two inhibitors (APV and DRV). In each simulation, the apo or inhibitor-bound protease was solvated in a cubic box with TIP3P water molecules 47 , keeping the boundary of the box at least 14 Å away from any protein atoms. Counterions were added for charge neutralization of whole simulation system. Each simulation system was first subjected to energy minimization using the steepest descents algorithm. Subsequently, a 5 ns MD simulation was carried out to heat the system to 300 K with the protein and the inhibitor fixed using a harmonic restraint (force constant = 10 kcal/mol/Å 2 ), followed by another 5 ns MD simulation with the protein C α atoms and inhibitor fixed. Finally, based on the relaxed system, the long-time equilibrium simulation (production run) was run without any constraints for 500 ns. The temperature of each system was maintained at 300 K using a novel V-rescale thermostat 48 with a response time of 1.0 ps. The pressure was kept at 1 bar using the Parrinello-Rahman pressure coupling scheme 49 ( τ = 1 ps). The cutoff for Lennard-Jones interactions was set as 12 Å and the electrostatic interactions were calculated using the particle Mesh Ewald (PME) algorithm 50 with a real-space cutoff of 12 Å. The LINCS 51 method was used to restrain bond lengths that including hydrogen atoms, allowing an integration step of 2 fs. Free energy perturbation calculation. The binding free energies of the inhibitors (APV and DRV) to HIV-1 PR and its Flap+ and Act variants were calculated using the free energy perturbation method described by Shirts et al 28 In these simulations, the coupled state (λ = 1) corresponds to a simulation where the solute (APV or DRV) is fully interacting with the environment and the uncoupled state (λ = 0) corresponds to a simulation where the solute does not interact with the environment. Each window corresponds to an independent simulation that includes 1.5 ns of equilibration and subsequent 3.5 ns of data collection. The free energies were computed using the Bennett acceptance ratio (BAR) 52 . The simulation temperature was kept constant at 300 K by coupling the system to a Nose´-Hoover thermostat 53,54 (τ = 0.5 ps) and the pressure was kept at 1 bar using the Parrinello-Rahman pressure coupling scheme (τ = 1 ps). The cutoff for Lennard-Jones interaction was set as 12 Å. Generalized cross-correlation analysis. Cross-correlations of residues in simulation systems were calculated based on mutual information between all C α atoms of protein using the generalized correlation analysis approach developed by Lange and Grubmüller 30 . The g_correlation module in the GROMACS package was applied for the analysis.
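The BAR estimator used for the free energies can be written as a one-dimensional self-consistency problem; the sketch below solves it numerically in reduced units. The forward/reverse work conventions stated in the docstring, the bracketing interval, and the toy self-check are assumptions for illustration; the actual calculations in the paper were performed with the GROMACS free-energy machinery rather than with this code.

```python
import numpy as np
from scipy.optimize import brentq

def bar_delta_f(w_f, w_r, beta=1.0):
    """Bennett acceptance ratio estimate of the free-energy difference.

    w_f: forward work values U1 - U0 evaluated on samples from state 0.
    w_r: reverse work values U0 - U1 evaluated on samples from state 1.
    Returns Delta F = F1 - F0 in the same units as 1/beta.
    """
    w_f, w_r = np.asarray(w_f, float), np.asarray(w_r, float)
    m = np.log(len(w_f) / len(w_r))

    def residual(df):
        fwd = np.sum(1.0 / (1.0 + np.exp(m + beta * (w_f - df))))
        rev = np.sum(1.0 / (1.0 + np.exp(-m + beta * (w_r + df))))
        return fwd - rev          # monotone in df, so a single root exists

    lo = min(w_f.min(), -w_r.max()) - 10.0 / beta
    hi = max(w_f.max(), -w_r.min()) + 10.0 / beta
    return brentq(residual, lo, hi)

# Toy self-check: shifting a potential by a constant c must give Delta F = c.
c = 2.5
print(bar_delta_f(np.full(2000, c), np.full(2000, -c)))   # -> 2.5
```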
8,527.4
2015-05-27T00:00:00.000
[ "Chemistry", "Medicine" ]
Conditional Lie-Bäcklund Symmetry Reductions and Exact Solutions of a Class of Reaction-Diffusion Equations The method of conditional Lie-Bäcklund symmetry is applied to solve a class of reaction-diffusion equations ut + uxx + Q(x)u2 x + P(x)u + R(x) = 0, which have wide range of applications in physics, engineering, chemistry, biology, and financial mathematics theory. The resulting equations are either solved exactly or reduced to some finite-dimensional dynamical systems. The exact solutions obtained in concrete examples possess the extended forms of the separation of variables. Introduction In this paper, we analyze a class of reaction-diffusion equations (RDEs) which admit certain conditional Lie-Bäcklund symmetries (CLBSs). Equation (1) can be simplified from the following RDEs: which have wide range of applications in physics, engineering, chemistry, biology, and financial mathematics theory [1][2][3][4][5]. Here the coefficient functions depend upon the variable which typically represents the value of the underlying asset, such as the price of a stock upon which an option is placed. There exist several special cases of (2), say, the Black-Scholes-Merton equation, the Longstaff equation [6], the Vasicek equation [7], and the Cox-Ingersoll-Ross equation [8], in which ( ) is zero. In order to keep the computations as simple as and consistent with this requirement, authors in [4] transform (2) into (1) by using an equivalence point transformation, namely, by means of ( , ) = ( , ) + ( ) , = ℎ ( ) , with ℎ ( ) 2 = 1 ( ) , Equation (2) can be transformed into Advances in Mathematical Physics where the primes denote differentiation with respect to x. Then, with the inverse transformation = ℎ −1 ( ), this equation can be performed as (1) on reversion to lower case variables and redefinition of the coefficient functions as appropriate. When ( ) = and ( ) = 0, (1) becomes which can simplify the linear form of under the transformation = log(V)/ + ∫ ( ) /2 + with 4 ( ) = −(2 + 2 + 4 ). It is known that symmetry reductions and exact solutions play important roles in the study of RDEs. The conditional Lie-Bäcklund symmetry (CLBS) method introduced by Zhdanov [9] and Fokas and Liu [10,11] firstly has been proved to be very powerful to classify equations or specify the functions appeared in the equations and construct the corresponding group invariant solutions. Furthermore, authors have shown that CLBS is closely related to the invariant subspace; namely, exact solutions defined on invariant subspaces for equations or their variant forms can be obtained by using the CLBS method [12][13][14][15][16][17][18][19][20][21][22][23][24]. Motivated by the form of (1), we set the following secondorder nonlinear CLBSs: which are very powerful to specify the functions appeared in (1) and construct the corresponding exact solutions. The remainder of this paper is organized as follows. In Section 2, some equations of the form (1) admitting CLBSs generated by (8) are obtained. CLBS reductions and exact solutions of two concrete examples are used to illustrate the results. Section 3 is devoted to conclusions and discussions. Equations Admitting CLBSs and Two Examples To consider further, we need the following proposition derived in [9,10]. (8) if and only if = 0 whenever satisfies (1) and = 0, where the prime denotes the Gateaux derivative, that is, Proposition 1. 
RDEs (1) admit CLBSs A direct computation from the above proposition yields To vanish all the coefficients of (9), we have the following overdetermined system: Solving this system, we can obtain the unknown functions in (1) and the corresponding CLBSs (8). From the first and seventh equations of system (10), it is apparent that the solutions can be divided into two cases including 1 ( ) = ( ) and 1 ( ) = 3 ( ) = 0, 1 ( ) ̸ = ( ). Example 1. Equation admits the CLBS The corresponding solutions are given by Advances in Mathematical Physics 5 where 1 ( ) and 2 ( ) satisfy the finite-dimensional dynamical system Exact solutions can be obtained as with two arbitrary constants 1 and 2 . Example 2. Equation The corresponding solutions are given by where 1 ( ) and 2 ( ) satisfy the system This dynamical system can be solved and exact solutions are with two arbitrary constants 1 and 2 . Here the solutions ( , ) preserve the forms of the separation of variables = 1 ( ) + 2 ( ) ( ), which are associated with the invariant subspace L{1, 2 } mentioned in [3] and the references therein. Conclusions and Discussions In this paper, we have discussed RDEs (1) by means of CLBS with characteristic (8). The key for this method is to determine presumably the form of the CLBS. For (1), we found that nonlinear CLBS (8) is very effective, which can yield some interesting symmetry reductions and exact solutions. Two examples are considered to illustrate this method in terms of the compatibility of CLBSs and the governing equations. Generally speaking, the obtained solutions cannot be derived within the framework of Lie's classical method and nonclassical method. In addition, it must be pointed out that, for the corresponding equations with certain fractional derivative [25], we can do similar work including CLBS classification, reductions, and exact solutions. Data Availability No data were used to support this study. Conflicts of Interest The authors declare that they have no conflicts of interest.
1,258.6
2018-07-02T00:00:00.000
[ "Mathematics" ]
Damage assessment of a titanium skin adhesively bonded to carbon fiber–reinforced plastic omega stringers using acoustic emission This study is devoted to the use of acoustic emission technique for a comprehensive damage assessment, that is, damage detection, localization, and classification, of an aeronautical metal-to-composite bonded panel. The structure comprised a titanium panel adhesively bonded to carbon fiber–reinforced plastic omega stringers. The panel contained a small initial artificial debonding between the titanium panel and one of the carbon fiber–reinforced plastic stringers. The panel was subjected to a cyclic increasing in-plane compression load, including loading, unloading, and then reloading to a higher load level, until the final fracture. The generated acoustic emission signals were captured by the acoustic emission sensors, and digital image correlation was also used to obtain the strain field on the surface of the panel during the test. The results showed that acoustic emission can accurately detect the damage onset, localize it, and also trace its evolution. The acoustic emission results not only were consistent with the digital image correlation results, but also managed to detect the damage initiation earlier than digital image correlation. Finally, the acoustic emission signals were clustered using particle swarm optimization method to identify the different damage mechanisms. The results of this study demonstrate the capability of acoustic emission for the comprehensive damage characterization of aeronautical bi-material adhesively bonded structures. Introduction With the increased awareness toward environmentally sustainable industries, especially in aviation and automotive industries, the need for fuel-efficient vehicles, and consequently less carbon dioxide (CO 2 ) emissions, becomes inevitable. There are various approaches to achieve this objective; one of which, particularly in aerospace industry, is improving the structure's aerodynamics and reducing the drag. This can be achieved by creating a laminar boundary layer flow rather than the conventional turbulent boundary layer flow. Srinivasan and Bertram 1 reported that approximately 50% of the total aircraft drag during cruise is due to friction drag, which is almost 10 times lower in the case of laminar flow as opposed to turbulent flow. Hybrid laminar flow control (HLFC) is one of the techniques developed to delay the transition from a laminar to turbulent boundary layer flow. HLFC can be obtained by integrating suction areas in the leading edges of the wing, for instance. 2 A promising structural solution is the combination of a micro-drilled outer titanium surface adhesively bonded with an inner composite structure. 1 However, in the case of shear loading or axial compression or a combination of both, the stiffened titanium-to-composite panels are susceptible to buckling failure. Thus, the proper understanding and investigation of the buckling and post-buckling behavior of such structures are inevitable. In the open literature, the response of open cross-sectional stiffeners, such as I-, C-, and T-stringers, was extensively discussed, [3][4][5][6][7][8][9][10][11][12][13] while less attention 1,[14][15][16] was drawn to closed cross-sections such as omega shape stringers, although they exhibit higher bending and torsional stiffness 16 when connected to the skin. 
Regardless of the stringer's choice and as can be anticipated, different damage mechanisms may occur in the stiffened panel, such as the skin failure, the stringer failure, and the debonding of the stringer-skin interface. The debonding is usually treated as the most critical damage type. This damage is usually invisible or barely visible, but it can considerably affect the integrity and stability of the structure, and finally leads to a catastrophic failure. 17 Therefore, in situ monitoring of the damage is essential to provide a reliable and safe aeronautical stiffened structure. Some non-destructive testing (NDT) techniques, such as guided wave and ultrasonic scan (UT), have been already used for damage detection in the stiffened panels. [17][18][19][20][21][22] However, these techniques are time-consuming and can only be carried out offline. Structural health monitoring (SHM) proposes continuous monitoring of the integrity of the structure by employing different techniques that have the capability of in situ monitoring, such as acoustic emission (AE) and fiber optic sensor. [23][24][25][26][27][28][29] Da´vila and Bisagni 30 performed a multi-instrumented compression fatigue test on the single-stringer-stiffened carbon fiber-reinforced plastic (CFRP) panels. They used UT, passive thermography, high-speed camera, and digital image correlation (DIC) to detect the propagation of the artificial debonding and to track the sequence of damage mechanisms up to the final fracture. The post-buckling deformation was captured by DIC. The passive thermography detected the onset of the artificial debonding growth and UT precisely sized it. The high-speed camera highlighted that the instantaneous final fracture occurred due to the stringer-skin debonding followed by the stringer crippling. The utilized techniques have some limitations that restrict employing them for the monitoring of a real structure. For example, as aforementioned, UT could not be used as an online monitoring technique, and the inspection area of thermography and DIC techniques was not wide enough to monitor the real large structures. Vanniamparambil et al. 31 used AE and ultrasonic guided wave to detect the debonding onset and also to track its evolution at the spar-skin interface of the CFRP-stiffened panel subjected to the fatigue loading. The results showed that the onset of the debonding was characterized by low-frequency and high-duration AE signals. Besides, the damage index, defined based on the recorded guided waves, was sensitive to the enlargement of the debonding area. Kolanu et al. 32 investigated the failure of a CFRP-stiffened panel subjected to the compression loading using AE, DIC, infrared thermography, and strain gauges. DIC captured the strain distributions during the buckling and post-buckling, while strain gauges could precisely detect the onset of buckling. However, because of the sudden final failure, they could not detect the initiation and propagation of the catastrophic damage properly. While the AE effectively detected and classified different damage mechanisms in the panel, the thermography images were used to verify the AE results and also to localize the delamination region in the panel. Although AE has been proven, in the open literature, as a well-established and effective tool in SHM, especially for composite structures, there is a significant gap of knowledge when it comes to three main challenges. 
These challenges can be summarized as follows: (1) the use of AE in SHM of bi-material structures, (2) the comprehensive damage assessment fulfilling the four levels of SHM, that is, damage initiation detection, damage classification, damage severity assessment, and damage localization, and (3) the reliability of the damage classification. Thus, in this study, a comprehensive AE-based damage assessment was performed that fulfills all four levels of SHM for a bi-material stiffened panel resembling an aeronautical structure as a benchmark. Moreover, a robust evolutionary optimization technique was employed for the AE damage clustering, which significantly increases the reliability of the SHM system by overcoming the shortcomings of the clustering techniques commonly used in the literature, that is, K-means, fuzzy-c-means, and self-organizing map. 23,33 Materials and manufacturing Two panels were fabricated from 0.8 mm thick titanium grade 2 sheet stiffened by omega CFRP stringers. The layup of the CFRP stringers was [0/45/45/0] for the inner laminate and [90/−45/−45/90] for the outer laminate. They were made from a five-harness weave fabric, Hexforce G0926 from Hexcel, with a 6K HS carbon fiber and an areal weight of 370 gsm, together with RTM6 resin from Hexcel. An adhesive film was used between the quasi-isotropic composite laminate and the titanium. The adhesive and RTM6 resin were cured together in one cycle to bond the titanium to the omega stringers. The foam core was made of Rohacell Hero 71-10. To create an artificial debonding, an Upilex-25S foil of 0.025 mm thickness was placed between one of the stringers and the titanium sheet. No adhesive was used at the location of the Upilex foil. The panels were fabricated by the vacuum-assisted resin transfer molding (VARTM) technique. The layup of the panels was done on a flat oil/water-heated mold with the titanium sheets toward the mold. Then, the panels were sealed by the vacuum bag and the resin was injected at a temperature of 80°C. The VARTM process was designed to achieve a 57% fiber volume fraction. After the injection process, the curing cycle was started. The panel was cured at a temperature of 180°C for 1.5 h. Afterward, two resin loading blocks were cast at the ends of the panel. The final length of the panel was milled to 270 mm after casting of the ends. The final dimensions of the panels are depicted in Figure 1. The ultrasonic C-scan images of both panels are shown in Figure 2. Compression testing The compression load was applied to the panels by an MTS 3500 kN hydraulic universal tensile/compression machine. The tests were performed under displacement control mode with a rate of 0.2 mm/min. The first test was a monotonic test to determine the maximum load and displacement expected. Then, the second test was a cyclic "loading/unloading" test designed based on the data collected from the monotonic test. The load and displacement values were recorded during the tests by the machine. Four AE sensors were placed on the panel surface, at the positions shown in Figure 3, to record and localize the AE signals originating during the compression test. In addition, the DIC system faced the titanium surface during the loading process to capture the displacement and strain fields on the panel surface. The lateral cross-section of the panel was continuously monitored using a digital camera. DIC A three-dimensional (3D) DIC system was calibrated and used to capture the displacement contour map during the test.
The DIC system, used for the full-field strain measurement, consisted of two 8-bit "Point Grey" cameras with "XENOPLAN 1.4/23" lenses. Both cameras had a resolution of 5 MP. ViC-Snap 8 software was used to record the speckle pattern images with an acquisition rate of 0.33 frames per second (fps) for the monotonic test and 0.25 fps for the cyclic test. Afterward, the acquired images were processed using ViC-3D 8 software. For processing, the subset size was set to 29 pixels with a step size (distance between subsets) of 7 pixels. The observation window of approximately 240 × 230 mm² produced an image with dimensions of 2048 × 1194 pixels. AE The AE events of the panel were captured by four AE sensors placed on the panel surface. As shown in Figure 3, the sensors were placed close to the four corners of the panel to obtain a wider inspection area. The AE sensors were broadband piezoelectric sensors, AE1045S-VS900M, with an external 34 dB preamplifier. An eight-channel AE system, AMSY-6 (Vallen Systeme GmbH), was employed for the AE measurements. The sampling rate and the threshold were set to 2 MHz and 40 dB, respectively. Ultrasonic gel was applied between the sensors and the panel surface to ensure good coupling. The pencil lead break procedure 34 was performed before the test to check the performance and reproducibility of the AE system. Particle swarm optimization Data clustering can be considered as an optimization process in which an objective function, which simultaneously accounts for the similarity of data points belonging to the same cluster and the dissimilarity of data points belonging to different clusters, is optimized. Some of the clustering methods frequently used for AE data, such as K-means, fuzzy-c-means, and self-organizing map, may get stuck in a local minimum and fail to converge to the best solution, especially for complex datasets. 23,33 Evolutionary algorithms are population-based random search methods inspired by social behaviors, in which the members interact locally with each other and with their environment simultaneously. The main advantage of evolutionary algorithms is that they explore the response space in parallel and in different directions. Therefore, they are less likely to become stuck in a local minimum, even for complex datasets. Particle swarm optimization (PSO) is one of the most popular evolutionary algorithms; it simulates the social behavior of bird flocking. It is an iterative method that optimizes an objective function by moving a population of candidate solutions (particles) through the solution domain, adjusting the position and the velocity of each particle. The movement of each particle is controlled by two factors simultaneously: its own best solution and the best solution found by the other particles. In this way, all the particles gradually move toward the global best solution. A flowchart of the PSO algorithm is depicted in Figure 4. According to the flowchart, the best solution is found in the following steps. 35 For a particle i with position x(i): 1. The algorithm creates N random particles, N = Swarm Size, within the variable limits [VarMin, VarMax] (see Table 1). 2. It assigns zero to the initial velocity of all the particles. 3. The best objective function value among all the particles, and the position of that particle (g), are found. 4. The velocity of particle i is updated as v = w·v + c1·r1·(p − x) + c2·r2·(g − x), where v is the velocity of particle i.
The term (p − x) indicates the difference between the current position and the best position ever found by particle i. The term (g − x) indicates the difference between the current position of particle i and the best position ever found by the other particles. w, c1, and c2 are the inertia, personal learning, and global learning weight factors, respectively, and r1 and r2 are uniformly (0,1) distributed random vectors. 5. The velocity must lie within the predefined velocity limits (see Table 1). If it is lower than VelMin, it is set to VelMin; if it is larger than VelMax, it is set to VelMax. 6. The position of particle i is updated as x = x + v. 7. The new position of particle i is checked against the boundaries of the solution domain. If it lies outside a bound, it is set equal to that bound, and if its velocity points outside the bound, the velocity is mirrored back toward the boundaries. 8. The objective function is calculated as f = fun(x). 9. If the calculated objective function is less than the best objective function ever found by particle i (f < fun(p)), then p = x. This step guarantees that p always represents the best position particle i has visited. 10. The algorithm then determines the best objective function over all the particles in the swarm, b = min_j f(j). If f < b, then b = f and d = x, so that b and d are always the best objective function value and the best location in the swarm. 11. If the stopping criterion is satisfied, the algorithm is terminated; otherwise it returns to Step (4). The maximum number of iterations was considered as the stopping criterion in this study. The pre-set parameters of the algorithm used in this study are reported in Table 1. As Clerc and Kennedy 36 recommended, if the weight factors w, c1, and c2 are calculated using equations (5)-(7), that is, w = χ, c1 = χφ1, and c2 = χφ2, with the constriction factor χ = 2/|2 − φ − √(φ² − 4φ)| and φ = φ1 + φ2, the best balance between exploration and exploitation throughout the response domain is achieved, which finally leads to finding the best solution. Specifically, the best choice is φ1 = φ2 = 2.05, which leads to w = 0.7298 and c1 = c2 = 1.4962. 36 Results and discussion Mechanical results Figure 5 shows the load-displacement curves for the two tested panels, Panel 1 and Panel 2. To obtain the maximum load of the panel, the first panel was subjected to a quasi-static monotonic compression load until the final fracture. As depicted in Figure 5, the maximum load of the panel was ∼250 kN. Then, the second panel was subjected to an increasing cyclic load with a load step of 50 kN, including loading, unloading, and then reloading to a higher load level, until the final fracture. The gradient of the load-displacement curve in each loading-unloading cycle is the same until the end of the third load cycle, while, from the fourth load cycle, there is a reduction in the slope of the unloading part in comparison with the loading part of the cycle, and a hysteresis area can be seen in the curve. This phenomenon may be attributed to plastic deformation of the titanium or damage propagation in the panel. The maximum displacement and failure load are approximately the same for both cases: ∼2 mm and 250 kN, respectively. In situ monitoring results In the SHM paradigm, the damage is fully characterized in four levels: damage initiation detection, damage severity, damage localization, and damage type identification. Figure 6 shows the proposed workflow for the damage assessment.
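For illustration only, the following is a minimal Python/NumPy sketch of the PSO procedure summarized in Steps 1-11 above. It assumes a generic scalar objective function and scalar variable limits; names such as pso_minimize, swarm_size, var_min, and var_max are hypothetical, and the velocity limits are an illustrative assumption rather than the values of Table 1. The constriction-factor weights follow the Clerc and Kennedy recommendation quoted above (φ1 = φ2 = 2.05, giving w = 0.7298 and c1 = c2 = 1.4962).

```python
import numpy as np

def pso_minimize(objective, dim, var_min, var_max, swarm_size=50, max_iter=200, seed=0):
    """Minimal particle swarm optimizer following the update rules in the text."""
    rng = np.random.default_rng(seed)
    phi1 = phi2 = 2.05
    phi = phi1 + phi2
    chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))  # = 0.7298
    w, c1, c2 = chi, chi * phi1, chi * phi2                   # 0.7298, 1.4962, 1.4962

    vel_max = 0.2 * (var_max - var_min)   # illustrative velocity limits (assumption)
    vel_min = -vel_max

    # Steps 1-2: random initial positions, zero initial velocities
    x = rng.uniform(var_min, var_max, size=(swarm_size, dim))
    v = np.zeros_like(x)
    f = np.array([objective(xi) for xi in x])

    p, fp = x.copy(), f.copy()            # personal best positions and values
    g_idx = np.argmin(fp)                 # Step 3: global best
    g, fg = p[g_idx].copy(), fp[g_idx]

    for _ in range(max_iter):             # Step 11: stop after max_iter iterations
        r1 = rng.random((swarm_size, dim))
        r2 = rng.random((swarm_size, dim))
        # Step 4: v = w*v + c1*r1*(p - x) + c2*r2*(g - x)
        v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)
        v = np.clip(v, vel_min, vel_max)  # Step 5: enforce velocity limits
        x = x + v                         # Step 6: position update
        # Step 7: clamp positions to the domain and mirror outgoing velocities
        out = (x < var_min) | (x > var_max)
        v[out] *= -1.0
        x = np.clip(x, var_min, var_max)
        # Steps 8-10: evaluate and update personal and global bests
        f = np.array([objective(xi) for xi in x])
        better = f < fp
        p[better], fp[better] = x[better], f[better]
        if fp.min() < fg:
            g_idx = np.argmin(fp)
            g, fg = p[g_idx].copy(), fp[g_idx]
    return g, fg
```

A call such as pso_minimize(lambda z: float(np.sum(z**2)), dim=2, var_min=-5.0, var_max=5.0) should converge toward the origin; the same interface is reused in the clustering sketch given later in this section.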
Based on the presented workflow, the damage initiation is first detected by means of the cumulative AE curve. Then, in the second level, the damage severity is assessed using the Felicity effect. Afterward, the damage AE signals are localized using the time difference of arrival technique. Finally, the damage type is identified by following several steps which will be fully described in section ''Damage mechanisms identification.'' Damage initiation and severity detection. Because of the good consistency between the load-displacement curve of the monotonic and cyclic tests, hereafter, the results are just presented for the cyclic test which easily enables the investigation of the damage evolution in more detail. The cumulative AE events curve during the six load cycles is presented in Figure 7. As it is visible, the first AE event occurred at the end of the third load cycle. In the fourth load cycle, as long as the load is less than the maximum load of the third load cycle, no AE event is detected. Once the load exceeds the maximum load of the third load cycle, a small jump in the cumulative curve occurs. The same trend is observed in the fifth load cycle, that is, no AE event occurs till the load exceeds the maximum load of the previous load cycle. However, at the end of the fifth load cycle, a very significant jump happened, which may indicate a severe damage in the panel as discussed later in this section. Ultimately, at the end of the sixth load cycle, another big jump in the cumulative curve is visible, which corresponds to the final fracture of the panel. The digital camera placed at the lateral side of the panel did not show any debonding or CFRP stringer failure at the edge before the final fracture of the panel which is consistent with the similar results reported in the literature. 30 For precisely detecting the damage initiation, the Kaiser and Felicity effects 37,38 are used. The Kaiser effect, introduced by Kaiser in the 1950s, is a method for evaluating the damage state in a structure. According to Kaiser's principle, once a structure is loaded up to a load higher than the damage-inducing threshold, it generates AE events, while if the structure is unloaded and reloaded again, it does not generate AE events anymore until the load crosses the maximum load of the previous cycle. This phenomenon indicates that the structure has not been degraded significantly because of the induced damage. However, in the reloading cycle, if AE events occur at a load level lower than the maximum load of the previous cycle, this is called ''Felicity effect.'' This indicates that severe/critical damage occurred in the structure which significantly degraded its integrity. To calculate the Kaiser and Felicity effect, in each load cycle, the load corresponding to the initiation of the significant AE activities is divided by the maximum load of the previous load cycle. As long as this ratio is equal to or greater than 1, the Kaiser effect prevails which indicates no critical damage occurrence. When the ratio drops below 1, the Felicity effect prevails which corresponds to considerable damage in the panel. In other words, the Kaiser effect can be considered as a special case of the Felicity effect (when the Felicity effect is 1). It is clear that the lower the value of the Felicity effect, the more the severity of the damage in the structure. Figure 8 depicts the Kaiser and Felicity effects in the cyclic test. Because there is no AE event in the first two cycles, the ratio cannot be defined for them. 
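Before turning to the measured ratios in Figure 8, the following short sketch illustrates how the Kaiser/Felicity ratio described above can be computed per load cycle. The input arrays and the numerical values in the example are hypothetical placeholders chosen only to mimic the qualitative trend discussed in the text; they are not the measured loads from the test.

```python
import numpy as np

def felicity_ratios(max_load_per_cycle, ae_onset_load_per_cycle):
    """Ratio for cycle i = (load at onset of significant AE in cycle i) /
    (maximum load of cycle i-1).  Ratio >= 1 -> Kaiser effect (no critical damage);
    ratio < 1 -> Felicity effect (significant damage in the previous cycle).
    Cycles without AE activity are returned as NaN (ratio undefined)."""
    ratios = []
    for i in range(1, len(max_load_per_cycle)):
        onset = ae_onset_load_per_cycle[i]
        prev_max = max_load_per_cycle[i - 1]
        ratios.append(np.nan if onset is None else onset / prev_max)
    return ratios

# Hypothetical illustration (loads in kN); not the measured values from the study.
max_loads = [50, 100, 150, 200, 230, 250]
ae_onsets = [None, None, 148, 152, 205, 207]   # load at first significant AE per cycle
print(felicity_ratios(max_loads, ae_onsets))    # last ratio ~0.9 -> Felicity effect
```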
As Figure 8 shows, the ratio for the third, fourth, and fifth load cycles is greater than or equal to 1, which indicates the Kaiser effect; however, it drops to ∼0.9 at the beginning of the sixth load cycle, which indicates the Felicity effect. This implies that the panel was severely damaged during the fifth load cycle, and this finally led to the catastrophic failure in the sixth load cycle. Damage localization. The next stage of the damage characterization is localizing the damage. Thus, the AE events captured by the AE sensors were localized by the time difference of arrival technique. 39 To avoid treating waves reflected from the boundaries as damage hits, two time-related parameters were considered: (1) the first hit discrimination time (FHD) and (2) the maximum time difference (MTD) between the first and last hits of the same event. Once the first hit of a new event is recorded by a sensor, all the hits arriving at the other sensors within the MTD are assigned to the same event dataset. When the MTD expires, the event dataset is closed. For a new event, a new dataset is opened and the first hit is recorded, provided the time elapsed since the previous hit is larger than the FHD. An event is localized if at least three out of four sensors record its corresponding hits. In the tested panel, considering the farthest possible event from one of the AE sensors (the diagonal of the sensor grid in Figure 3 is ∼0.3 m) and the measured wave velocity in the titanium panel (4950 m/s), the maximum time difference of arrival for the sensor farthest from the event source is ∼60 µs (0.3 m divided by 4950 m/s). Therefore, the MTD was set to 60 µs to avoid treating reflected waves as damage hits. In addition, to give the panel enough time to damp the waves of an event reflected from the boundaries, the FHD was set to 1 ms. The density plot of the localized events is shown in Figure 9. The initial location of the artificial debonding is highlighted by the hatched rectangle. As can be seen, no AE event is detected in the first two cycles. At the end of the third load cycle, the first AE event was localized close to the artificial debonding. Afterward, the density of the AE events considerably increased at the end of the fourth load cycle, and they were mostly located in the vicinity of the artificial debonding. At the end of the fifth load cycle, a new dense group of AE events was localized at the right side of the panel, which indicates the occurrence of new damage in this region. Finally, in the density plot at the end of the sixth load cycle, another increase in the density function around the artificial debonding was observed, which emphasizes that the damage mostly propagated around the artificial debonding during this load cycle. To verify the AE localization results, DIC was also employed to trace the damage evolution in the panel during the six load cycles. Figure 10 summarizes the out-of-plane displacement of the titanium panel obtained from DIC. An almost uniform displacement distribution is observed on the panel's surface for the first three cycles. Starting from the end of the fourth load cycle, an out-of-plane concentration appeared at the location of the artificial debonding, which is consistent with the AE localization results (see Figure 9). This concentration is magnified in the fifth load cycle. In the sixth load cycle, another out-of-plane concentration is visible at the right side of the panel, which is in agreement with the AE localization results for the fifth and sixth load cycles.
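As a rough illustration of the event-building rules described above (FHD and MTD windows, and the requirement that at least three of the four sensors record an event), the sketch below groups time-sorted AE hits into candidate events. The hit format, function names, and default thresholds are assumptions for illustration; the default MTD simply reproduces the ∼60 µs travel-time estimate given in the text.

```python
def max_time_difference(diagonal_m=0.3, wave_speed_mps=4950.0):
    """MTD: wave travel time across the farthest sensor spacing (~60 microseconds)."""
    return diagonal_m / wave_speed_mps

def build_events(hits, fhd=1e-3, mtd=None):
    """Group AE hits into events.

    `hits` is a time-sorted list of (arrival_time_s, sensor_id) tuples.  Hits
    arriving within MTD of an event's first hit join that event; once the MTD
    has expired, a new event is opened only if the gap to the previous hit
    exceeds FHD (so that boundary reflections have time to decay).  Only events
    recorded by at least three sensors are kept for localization.
    """
    if mtd is None:
        mtd = max_time_difference()
    events, current, last_time = [], [], None
    for t, sensor in hits:
        if current and (t - current[0][0] <= mtd):
            current.append((t, sensor))            # same event dataset
        elif last_time is None or (t - last_time > fhd):
            if current:
                events.append(current)             # close the previous dataset
            current = [(t, sensor)]                # first hit of a new event
        last_time = t
    if current:
        events.append(current)
    return [ev for ev in events if len({s for _, s in ev}) >= 3]
```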
Although the DIC results are consistent with the AE results, DIC shows a one-cycle delay in comparison to AE. This is due to the fact that AE detects the debonding propagation itself, while the increase in out-of-plane displacement detected by DIC is a consequence of this debonding propagation. Damage mechanisms identification. As depicted in Figure 6, the damage clustering using AE signals is done in seven steps: (1) feature extraction, (2) feature selection, (3) data dimensionality reduction using Principal Component Analysis (PCA), (4) finding the optimum number of clusters, (5) AE data clustering using the PSO algorithm, (6) assigning each AE cluster to the corresponding damage mechanism, and (7) validation of the damage clustering results. The details of each step are discussed below. Twelve AE features, among those most commonly used in the literature, were extracted from the AE signals; they are presented in Table 2. The upper and lower limits of the features were specified based on the AE dataset of the cyclic test. In the feature selection step, the features with the highest discriminating capability should be selected from all the available AE features, because these features lead to a larger spread of the data. As is clear from Table 2, because the features are expressed in different units and their ranges differ widely, the variations of the raw data are not directly comparable. Therefore, each feature is first scaled to the range [0, 1] by dividing all its values by the maximum value of that feature. Then, a descriptive statistical analysis is performed using box plots. In this way, five main parameters are determined: the median, the first quartile, the third quartile, and the data minimum and maximum. This ensures that all the data are compared within the same confidence interval of 99.3%, as depicted in Figure 11. Accordingly, the top five features with the highest discriminating capability, that is, A, FFT_CoG, RMS, FFT_FoM, and R/D, are selected for the rest of the analysis. The five selected features are then analyzed using the PCA method to reduce the dimensionality of the dataset for easier data manipulation and analysis. PCA creates new independent variables (principal components), constructed as linear functions of the initial variables, that maximize variance (increasing the discrimination potential of the data). In PCA, most of the information of the initial variables is concentrated in the first components. More details on the PCA method can be found in Pashmforoush et al. 40 In this study, because the initial data dimensionality was five (the five features A, FFT_CoG, RMS, FFT_FoM, and R/D), PCA resulted in five principal components, in which most of the information is found in the first component, then in the second, and so forth. Figure 12 shows the first two principal components of the AE signals of the panel (PCA 1 and PCA 2), which provide the highest discrimination for the AE dataset. In the case of supervised classification, the number of classes is known beforehand from the training dataset. On the contrary, in the case of unsupervised clustering, finding the optimum number of clusters is a challenge. Before clustering the data, the optimum number of clusters was therefore determined using the Davies-Bouldin, silhouette, and Calinski-Harabasz criteria. All three methods find the optimum number of clusters by performing an iterative optimization process.
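A compact sketch of the feature-scaling, PCA, and cluster-number-selection steps described above is given below, using scikit-learn. The array names are hypothetical, and K-means is used here only to generate trial partitions for evaluating the three validity criteria; it is not the clustering method adopted in the study (which uses PSO).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, calinski_harabasz_score, davies_bouldin_score

def select_and_project(features):
    """Scale each AE feature to [0, 1] by its maximum, then project with PCA.

    `features` is an (n_signals, n_features) array of the selected descriptors,
    e.g. A, FFT_CoG, RMS, FFT_FoM and R/D."""
    scaled = features / features.max(axis=0)          # divide each feature by its maximum
    pca = PCA(n_components=2)
    return pca.fit_transform(scaled), pca.explained_variance_ratio_

def optimum_cluster_number(data, k_range=range(2, 9)):
    """Score candidate cluster counts with the three criteria used in the text."""
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data)
        scores[k] = {
            "silhouette": silhouette_score(data, labels),              # higher is better
            "calinski_harabasz": calinski_harabasz_score(data, labels),  # higher is better
            "davies_bouldin": davies_bouldin_score(data, labels),      # lower is better
        }
    return scores
```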
In the case of the Calinski-Harabasz criterion, the objective function is defined as a function of the ratio of the between-cluster variance to the within-cluster variance. The best response is found for the largest between-cluster variance and the smallest within-cluster variance. The silhouette criterion tries to maximize an objective function that indicates the similarity of each data point to its own cluster. It varies from −1 to +1, and the higher the value, the better the response. Finally, the Davies-Bouldin criterion does almost the opposite of the Calinski-Harabasz criterion, and its objective function is defined as the ratio of within-cluster to between-cluster distances. Therefore, for this criterion, the lower the value, the better the response. The details of these criteria can be found in Calinski and Harabasz, 41 Davies and Bouldin, 42 and Rousseeuw. 43 In conclusion, the highest values of the Calinski-Harabasz and silhouette indices and the lowest value of the Davies-Bouldin index indicate the optimum number of clusters. Therefore, as depicted in Figure 13, the optimum number of clusters for the present AE dataset is 4. Afterward, the PSO algorithm was used to cluster the AE data into four clusters by minimizing the within-cluster distance (WCD) objective function, WCD = (1/n) Σ_{j=1..k} Σ_{x∈C_j} d(x, m_j), where x indicates a data point, k is the number of clusters, C_j is cluster j, m_j represents the centroid of cluster j, d is the Euclidean distance, and n denotes the total number of data points. The stopping criterion of the algorithm was the maximum number of iterations, which was set to 200. The clustered data and the values of the objective function at each iteration are summarized in Figure 14. After ∼30 iterations, the PSO algorithm converged to the best clustering solution. The cumulative number of events for each cluster during the cyclic test is illustrated in Figure 15. The first cluster to appear, starting at the end of the third load cycle, is cluster 4. Then, clusters 2 and 3 initiated at the end of the fourth load cycle. Cluster 1 is the last one, starting at the end of the fifth load cycle. The load level corresponding to the initiation of each cluster is reported in Table 3. To correlate these AE clusters with the associated damage mechanisms, the clusters were first localized. Then, the location of each cluster was compared with the traces of the different damage mechanisms on the fracture surface of the panel. As shown in Figure 16, cluster 4 was mainly located close to the artificial debonding, while cluster 1 was mostly located at the right side of the panel, far from the artificial debonding. Clusters 2 and 3 were distributed over almost the whole area of the panel. To find the damage mechanisms associated with these clusters, the damaged panel was cut at the region shown in Figure 17(a). The images of the damaged CFRP-titanium interface are depicted in Figure 17(b). The dominant damage mechanism at the left side of the panel, where the artificial debonding exists, is the separation of the adhesive layer from the CFRP or the titanium, that is, "adhesive failure." The adhesive material remaining on only one side of the titanium-CFRP interface is evidence of adhesive failure. At the right side of the panel, the dominant damage mechanism is damage within the adhesive layer itself, that is, "cohesive failure." The trace of adhesive material on both the CFRP and the titanium surfaces is a signature of the cohesive failure mode.
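To connect the clustering step described earlier in this section with the pso_minimize sketch given above, the following snippet shows one way to express the within-cluster distance (WCD) objective so that a particle swarm can minimize it: each particle encodes a full set of candidate centroids. The encoding and the per-point normalization are assumptions consistent with the WCD expression as reconstructed above, not necessarily the authors' exact implementation.

```python
import numpy as np

def make_wcd_objective(data, k):
    """Within-cluster distance (WCD) objective for PSO-based clustering.

    A particle position is the flattened (k, n_features) array of candidate
    centroids; each point is assigned to its nearest centroid, and the mean
    Euclidean distance to the assigned centroids is returned."""
    n, dim = data.shape

    def wcd(position):
        centroids = position.reshape(k, dim)
        dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        return dists.min(axis=1).mean()

    return wcd

# Sketch of use with the pso_minimize helper defined earlier (names are illustrative):
# objective = make_wcd_objective(pca_scores, k=4)
# best_pos, best_wcd = pso_minimize(objective, dim=4 * pca_scores.shape[1],
#                                   var_min=pca_scores.min(), var_max=pca_scores.max(),
#                                   max_iter=200)
# centroids = best_pos.reshape(4, pca_scores.shape[1])
# labels = np.linalg.norm(pca_scores[:, None] - centroids[None], axis=2).argmin(axis=1)
```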
Based on this fractographic evidence, AE cluster 4, which was mainly located at the left side of the panel, is correlated with the adhesive failure, and cluster 1, which mostly occurred at the right side of the panel, is allocated to the cohesive failure. To determine the source of clusters 2 and 3, which were distributed over a wider area, the longitudinal strain of the titanium panel during the six load cycles was calculated by DIC and is plotted in Figure 18. According to the titanium datasheet, the yield strain of the titanium panel is ∼0.25%. From the strain curve of the panel (see Figure 18), it is clear that up to the end of the third load cycle the strain value is less than the yield strain of the titanium, while in the fourth load cycle the strain exceeds the titanium's yield strain. This moment is almost coincident with the initiation of clusters 2 and 3 in the fourth load cycle in Figure 15. Thus, one or both of clusters 2 and 3 can be attributed to the titanium yielding. To assess the similarity of these two clusters to the titanium yielding signals, a standard dog-bone test sample was fabricated from the titanium panel and subjected to a quasi-static tensile test, while its AE activity was recorded by an AE sensor. A similarity_index was defined to quantify the similarity between the titanium yielding signals and clusters 2 and 3 in terms of within-set and between-set distances, where C_i indicates cluster i, n_i is the number of data points of cluster i, and d(x_i, x_j) is the distance between two data points x_i and x_j. d_i denotes the average internal distance between the data points inside cluster i, while d_o denotes the average external distance between the data points of cluster i and those of the other clusters. Here, C_i denotes the titanium yielding AE dataset and C_j represents clusters 2 and 3. Each data point, x, was defined by the five features A, FFT_CoG, RMS, FFT_FoM, and R/D. A similarity_index value of 1 indicates that there is no similarity between the titanium yielding signals and clusters 2 and 3, while a value close to 0, or even negative, indicates a strong similarity between the titanium yielding signals and these two clusters. The similarity_index values for clusters 2 and 3 were 0.1441 and 0.0109, respectively, which indicates a high similarity between the titanium yielding signals and both clusters 2 and 3. Therefore, as both clusters started at the same time and had very small similarity_index values with respect to the titanium yielding signals, both clusters 2 and 3 are allocated to the titanium yielding. The camera placed at the side of the panel did not show any damage in the CFRP stringers up to the moment of the final failure of the panel, which is consistent with the literature. 30 Therefore, no AE cluster is attributed to CFRP failure. In summary, clusters 4 and 1 were allocated to the adhesive failure and the cohesive failure, respectively, by mapping the localized clusters onto the fractography images of the damaged panel. Clusters 2 and 3 started simultaneously in the fourth load cycle, when the DIC results indicated yielding of the titanium panel. Moreover, the proposed similarity index between the AE signals obtained from the tensile test of a titanium sample and both clusters 2 and 3 revealed a high similarity. Therefore, both clusters 2 and 3 were attributed to the titanium yielding.
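The equation defining the similarity_index did not survive extraction, so the snippet below implements a silhouette-style form that is merely consistent with the verbal definition above (average internal distance d_i versus average external distance d_o, with values near 1 indicating dissimilar sets and values near 0 or below indicating similar sets). It is an assumed form for illustration, not necessarily the authors' exact expression.

```python
import numpy as np
from scipy.spatial.distance import cdist

def similarity_index(reference_set, other_set):
    """Silhouette-style index between two AE feature datasets (assumed form).

    d_i: mean pairwise distance inside `reference_set` (internal distance).
    d_o: mean distance from points of `reference_set` to points of `other_set`.
    Values near 1 indicate dissimilar sets; values near 0 (or negative)
    indicate strongly overlapping, i.e. similar, sets."""
    d_internal = cdist(reference_set, reference_set)
    # exclude the zero self-distances on the diagonal from the internal average
    d_i = d_internal[~np.eye(len(reference_set), dtype=bool)].mean()
    d_o = cdist(reference_set, other_set).mean()
    return (d_o - d_i) / max(d_i, d_o)
```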
Conclusion This study was devoted to the SHM of an aeronautical titanium panel stiffened by omega-shaped CFRP stringers using AE and DIC techniques. Two panels with a small artificial debonding between the titanium sheet and one of the CFRP stringers were fabricated. The first panel was subjected to a quasi-static monotonic compression load to find the maximum load, which was ∼250 kN, and accordingly, the second one was subjected to an increasing cyclic load with a load step of 50 kN up to the final fracture. AE was used to comprehensively characterize the damage, that is, damage initiation detection, damage severity, damage localization, and damage type identification. The concluding remarks are summarized as follows: 1. The AE analysis enabled the distinction between damage initiation and damage severity. Although damage initiation occurred at the end of the third cycle, it was not severe enough to affect the integrity of the structure. This was confirmed by the Felicity analysis, which highlighted the occurrence of severe damage only at the end of the fifth cycle. 2. The localized AE events on the panel surface were consistent with the regions of highest out-of-plane displacement highlighted by DIC. The damage first started around the artificial debonding and then propagated to the other side of the panel, which ultimately resulted in the catastrophic failure of the panel. 3. Comparing the AE results with the DIC results revealed that, although both techniques detected the damage, DIC detected it one cycle later than AE. This is due to the fact that AE detected the damage propagation itself, while DIC detected the consequence of this damage, in this case the increase in out-of-plane displacement. 4. Finally, five features with the highest discrimination capability were selected to cluster the AE signals using the PSO algorithm. The obtained clusters were then assigned to their associated damage mechanisms, that is, adhesive failure, cohesive failure, and titanium yielding. The obtained results demonstrate the potential of AE, as an SHM technique, for monitoring the integrity of aeronautical composite-to-metal adhesively bonded structures. Declaration of conflicting interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was funded by the Clean Sky 2 Joint Undertaking under the European Union's Horizon 2020 program TICOAJO (Grant No. 737785).
Polarization-based Tests of Gravity with the Stochastic Gravitational-Wave Background The direct observation of gravitational waves with Advanced LIGO and Advanced Virgo offers novel opportunities to test general relativity in strong-field, highly dynamical regimes. One such opportunity is the measurement of gravitational-wave polarizations. While general relativity predicts only two tensor gravitational-wave polarizations, general metric theories of gravity allow for up to four additional vector and scalar modes. The detection of these alternative polarizations would represent a clear violation of general relativity. The LIGO-Virgo detection of the binary black hole merger GW170814 has recently offered the first direct constraints on the polarization of gravitational waves. The current generation of ground-based detectors, however, is limited in its ability to sensitively determine the polarization content of transient gravitational-wave signals. Observation of the stochastic gravitational-wave background, in contrast, offers a means of directly measuring generic gravitational-wave polarizations. The stochastic background, arising from the superposition of many individually unresolvable gravitational-wave signals, may be detectable by Advanced LIGO at design-sensitivity. In this paper, we present a Bayesian method with which to detect and characterize the polarization of the stochastic background. We explore prospects for estimating parameters of the background, and quantify the limits that Advanced LIGO can place on vector and scalar polarizations in the absence of a detection. Finally, we investigate how the introduction of new terrestrial detectors like Advanced Virgo aids in our ability to detect or constrain alternative polarizations in the stochastic background. We find that, although the addition of Advanced Virgo does not notably improve detection prospects, it may dramatically improve our ability to estimate the parameters of backgrounds of mixed polarization.
The measurement of gravitational-wave polarizations represents another avenue by which to test general relativity. While general relativity allows for the existence of only two gravitational-wave polarizations (the tensor plus and cross modes), general metric theories of gravity may allow for up to four additional polarizations: the x and y vector modes, and the breathing and longitudinal scalar modes [8,9,11]. The effects of all six polarizations on a ring of freely-falling test particles are shown in Fig. 1. The detection of these alternative polarization modes would represent a clear violation of general relativity, while their non-detection may serve to experimentally constrain extended theories of gravity. Few experimental constraints exist on the polarization of gravitational waves [12]. Very recently, though, the simultaneous detection of GW170814 with the Advanced LIGO and Virgo detectors has allowed for the first direct study of a gravitational wave's polarization [7,15]. When analyzed with models assuming pure tensor, pure vector, and pure scalar polarization, GW170814 significantly favored the purely-tensor model over either alternative [7,15]. This result represents a significant first step in polarization-based tests of gravity. Further tests with additional detectors, though, will be needed to sensitively test general relativity and its alternatives. In particular, many alternative theories of gravity predict signals of mixed polarization, with vector and/or scalar modes in addition to standard tensor polarizations. When allowing generically for all six polarization modes, the three-detector Advanced LIGO-Virgo network is generally unable to distinguish the polarization of transient gravitational-wave signals, like those from binary black holes [5, 10-12, 15, 16]. First, the two LIGO detectors are nearly co-oriented, leaving Advanced LIGO largely sensitive to only a single polarization mode [5,7,11,12]. Second, even if the LIGO detectors were more favorably oriented, a network of at least six detectors is generically required to uniquely determine the polarization content of a gravitational-wave transient [10,11,17]. Some progress can be made via the construction of "null streams" [17], but this method is infeasible at present without an independent measure of a gravitational wave's source position (such as an electromagnetic counterpart). Future detectors like KAGRA [18] or LIGO-India [19] will therefore be necessary to break existing degeneracies and confidently distinguish vector or scalar polarizations in gravitational-wave transients. It should be noted that the scalar longitudinal and breathing modes induce perfectly degenerate responses in quadrupolar detectors like Advanced LIGO and Virgo. Thus a network of quadrupolar detectors can at most measure five independent polarization degrees of freedom [11,15,17].
Beyond the direct detection of binary coalescences, another target for current and future detectors is the observation of the astrophysical stochastic gravitational-wave background, formed via the superposition of all gravitational-wave sources that are too weak or too distant to individually resolve [16,[20][21][22][23][24]. Although the strength of the background remains highly uncertain, it may be detected by Advanced LIGO in as few as two years of coincident observation at design-sensitivity [24][25][26]. Unlike direct searches for binary black holes, Advanced LIGO searches for long-lived sources like the stochastic background and rotating neutron stars [16,[27][28][29][30][31][32] are currently capable of directly measuring generic gravitational-wave polarizations without the introduction of additional detectors or identification of an electromagnetic counterpart. The observation of the stochastic background would therefore enable novel checks on general relativity not possible with transient searches using the current generation of gravitational-wave detectors. In this paper, we explore the means by which Advanced LIGO can detect and identify alternative polarizations in the stochastic background. First, in Sect. II, we consider possible theorized sources which might produce a background of alternative polarizations. We note, though, that stochastic searches are largely unmodeled, requiring few assumptions about potential sources or theories giving rise to alternative polarization modes (see, however, Sect. VI). In Sect. III we discuss the tools used for detecting the stochastic background and compare the efficacy of standard methods with those optimized for alternative polarizations. In Sect. IV we then propose a Bayesian method with which to both detect generically-polarized backgrounds and determine if alternative polarization modes are present. Next, in Sect. V we explore prospects for estimating the polarization content of the stochastic background. We quantify the limits that Advanced LIGO can place on the presence of alternative polarizations in the stochastic background, limits which may be translated into constraints on specific alternative theories of gravity. As new detectors are brought online in the coming years, searches for alternative polarizations in the stochastic background will become ever more sensitive. In both Sects. IV and V, we therefore investigate how the addition of Advanced Virgo improves our ability to detect or constrain backgrounds of alternative polarizations. Finally, in Sect. VI we ask if our proposed search is robust against unexpectedly complex backgrounds of standard tensor polarizations. II. EXTENDED THEORIES OF GRAVITY AND ALTERNATIVE POLARIZATION MODES Searches for the stochastic background are largely unmodeled, making minimal assumptions about the source of a measured background. Nevertheless, it is interesting to consider which sources might give rise to a detectable background of alternative polarization modes. In this section we briefly consider several possibilities that have been proposed in the literature. We will focus mainly on scalar-tensor theories, which predict both tensor and scalar-polarized gravitational waves [33]. Our discussion below is not meant to be exhaustive; there may well exist additional sources that can give rise to backgrounds of extra polarization modes. In particular, we do not discuss possible sources of vector modes, predicted by various alternative theories of gravity (see Ref. [27] and references therein). 
Note that, while advanced detectors may not be sensitive to the sources described below, these sources may become increasingly relevant for third generation detectors (or beyond). Core-collapse supernovae (CCSNe) represent one potential source of scalar gravitational waves. Although sphericallysymmetric stellar collapses do not radiate gravitational waves in general relativity, they do emit scalar breathing modes in canonical scalar-tensor theories. While the direct observation of gravitational waves from CCSNe is expected to place strong constraints on scalar-tensor theories [34], only supernovae within the Milky Way are likely to be directly detectable using current instruments [35,36]. Such events are rare, occurring at a rate between (0.6 − 10.5) × 10 −2 yr −1 [37]. The stochastic gravitational-wave background, on the other hand, is dominated by distant undetected sources, and so in principle it is possible that a CCSNe background of breathing modes could be detected before the observation of a single Galactic supernova [38,39]. However, realistic simulations of monopole emission from CCSNe predict only weak scalar emission [34]. Nevertheless, certain extreme phenomenological supernovae models predict gravitational radiation many orders of magnitude stronger than in more conventional models [35]. According to such models, CCSNe may contribute non-negligibly to the stochastic background. Compact binary coalescences may also contribute to a stochastic background of scalar gravitational waves. In many scalar-tensor theories, bodies may carry a "scalar charge" that sources the emission of scalar gravitational waves [40,41]. Monopole scalar radiation is suppressed due to conservation of scalar charge, but in a general scalar-tensor theory there is generally no conservation law suppressing dipole radiation. Scalar dipole radiation from compact binaries is enhanced by a factor of (v/c) −2 relative to ordinary quadrupole tensor radiation (where v is the orbital velocity of the binary and c the speed of light), and thus represents a potentially promising source of scalar gravitational waves. Electromagnetic observations of binary neutron stars place stringent constraints on anomalous energy loss beyond that predicted by general relativity; these constraints may be translated into a strong limit on the presence of additional scalar-dipole radiation [42,43]. Such limits, though, are strongly model-dependent, assuming a priori only small deviations from general relativity. Additionally, pure vacuum solutions like binary black holes are not necessarily subject to these constraints. If, for example, the scalar field interacts with curvature only through a linear coupling to the Gauss-Bonnet term, scalar radiation is produced by binary black holes but not by binary neutron stars [44,45]. Alternatively, binary black holes can avoid the nohair theorem and obtain a scalar charge if moving through a time-dependent or spatially-varying background scalar field [46,47]. A variety of exotic sources may generically contribute to stochastic backgrounds of alternative polarizations as well. Cosmic strings, for instance, generically radiate alternative polarizations in extended theories of gravity and may therefore contribute extra polarization modes to the stochastic gravitational-wave background [48,49]. Another potential source of stochastic backgrounds of alternative polarizations are the so-called "bubble walls" generated by first order phase transitions in the early Universe [50][51][52]. 
In scalar-tensor theories, bubbles are expected to produce strong monopolar emission [40]. Gravitational waves from bubbles are heavily redshifted, though, and today may have frequencies too low for Advanced LIGO to detect [51]. Bubble walls may therefore be a more promising target for future space-based detectors like LISA than for current ground-based instruments. Finally, we note that it is also possible for alternative polarizations to be generated more effectively from sources at very large distances. There are several ways in which this might occur. First, modifications to the gravitational-wave dispersion relation can lead to mixing between different polarizations in vacuum (an effect analogous to neutrino oscillations). This can cause mixing between the usual tensor modes [53], and also between tensor modes and other polarizations [54,55]. Thus alternative polarizations can be generated during propagation, even if only tensor modes are produced at the source. This effect would build with the distance to a given gravitational-wave source. Such behavior is among the effects arising from generic Lorentz-violating theories of gravity [56,57]. While birefringence and dispersion of the standard plus and cross modes have been explored observationally in this context [57,58], the phenomenological implications of additional polarization modes remain an open issue at present. Secondly, in many alternative theories fundamental constants (such as Newton's constant G) are elevated to dynamical fields; these fields may have behaved differently at earlier stages in the Universe's evolution [59,60]. As a consequence, local constraints on scalar emission may not apply to emission from remote sources. Additionally, it is in principle possible for local sources to be affected by screening mechanisms that do not affect some remote sources [61]. III. STOCHASTIC BACKGROUNDS OF ALTERNATIVE POLARIZATIONS The stochastic background introduces a weak, correlated signal into networks of gravitational-wave detectors. Searches for the stochastic background therefore measure the cross-correlation Ĉ between the strains s̃_1(f) and s̃_2(f) measured by pairs of detectors (see Ref. [16] for a comprehensive review of stochastic background detection methods). We will make several assumptions about the background. First, we will assume that the stochastic background is isotropic, stationary, and Gaussian. Second, we assume that there are no correlations between different tensor, vector, and scalar polarization modes. We can therefore express the total measured cross-power ⟨Ĉ(f)⟩ as a sum of three terms due to each polarization sector. Finally, we assume that the tensor and vector sectors are individually unpolarized, with equal power in the tensor plus and cross modes and equal power in the vector-x and vector-y modes. This follows from the fact that we expect gravitational-wave sources to be isotropically distributed and randomly oriented with respect to the Earth. In contrast, we cannot assume that the scalar sector is unpolarized. Scalar breathing and longitudinal modes cannot be rotated into one another via a coordinate transformation (as can the tensor plus and cross modes, for instance), and so source isotropy does not imply equal power in each scalar polarization.
However, the responses of the LIGO detectors to breathing and longitudinal modes are completely degenerate, and so Advanced LIGO is sensitive only to the total power in scalar modes rather than the individual energies in the breathing and longitudinal polarizations [11,28]. The above assumptions are not all equally justifiable, and may be broken by various alternative theories of gravity. For instance, one should not expect an unpolarized background in any theory that includes parity-odd gravitational couplings, like Chern-Simons gravity [62][63][64][65], even in the absence of non-tensorial modes [66]. Furthermore, different polarizations may not be statistically independent, as is the case for the breathing and longitudinal modes in linearized massive gravity [67]. Finally, we should expect a departure from isotropy in any theory violating Lorentz invariance, like those within the standard model extension framework [53,56,57]. These exceptions notwithstanding, for simplicity we will proceed under the assumptions listed above, leaving more generic cases for future work. Under our assumptions, the measured cross-power due to the background takes the form ⟨Ĉ(f)⟩ ∝ γ_a(f) H_a(f) [16,28,68], where repeated indices denote summation over tensor, vector, and scalar modes (a ∈ {T, V, S}). The overlap reduction functions γ_a(f) quantify the sensitivity of detector pairs to isotropic backgrounds of each polarization [28,69] (see Appendix A for details). The functions H_a(f), meanwhile, encode the spectral shape of the stochastic background within each polarization sector. On the left side of Fig. 2, we show the overlap reduction functions for the Hanford-Livingston (H1-L1) Advanced LIGO network. The overlap reduction functions are normalized such that γ_T(f) = 1 for coincident and coaligned detectors. For the Advanced LIGO network, the tensor overlap reduction function has magnitude |γ_T(0)| = 0.89 at f = 0, representing reduced sensitivity due to the separation and relative rotation of the H1 and L1 detectors. Additionally, the H1-L1 tensor overlap reduction function decays rapidly to zero above f ≈ 64 Hz. Standard Advanced LIGO searches for the stochastic background therefore have negligible sensitivity at frequencies above ∼ 64 Hz. Relative to γ_T(f), the H1-L1 vector overlap reduction function γ_V(f) is of comparable magnitude at low frequencies, but remains non-negligible at frequencies above 64 Hz. As a result, we will see that Advanced LIGO is in many cases more sensitive to vector-polarized backgrounds than to standard tensor backgrounds. The scalar overlap reduction function, meanwhile, is smallest in magnitude, with |γ_S(0)| a factor of three smaller than |γ_T(0)| and |γ_V(0)|. Advanced LIGO is therefore least sensitive to scalar-polarized backgrounds. This reflects a generic feature of quadrupole gravitational-wave detectors, which geometrically have a smaller response to scalar modes than to vector and tensor polarizations [32]. For an extreme example of the opposite case, see pulsar timing arrays, which are orders of magnitude more sensitive to longitudinal polarizations than to standard tensor-polarized signals [70,71]. For comparison, the right side of Fig.
2 shows the overlap reduction functions for the Hanford-Virgo (H1-V1) baseline. As the separation between Hanford and Virgo is much greater than that between Hanford and Livingston, the Hanford-Virgo overlap reduction functions are generally much smaller in amplitude and more rapidly oscillatory, translating into weaker sensitivity to the stochastic background. Note, however, that the H1-V1 tensor overlap reduction function remains larger in amplitude than H1-L1's at frequencies f ≳ 200 Hz, implying heightened relative sensitivity to tensor backgrounds at high frequencies [72]. The functions H_a(f) appearing in Eq. (2) are theory-independent; they are observable quantities that can be directly measured in the detector frame. Stochastic backgrounds are not conventionally described by H(f), though, but by their gravitational-wave energy-density [68], Ω_a(f) = (1/ρ_c) dρ_a/d ln f, defined as the fraction of the critical energy density ρ_c = 3H_0² c²/(8πG) contained in gravitational waves per logarithmic frequency interval d ln f. Here, H_0 is the Hubble constant and G is Newton's constant. Within general relativity, the background's energy-density is related to H(f) via [68] Ω_a(f) = (2π²/(3H_0²)) f³ H_a(f). Eq. (4) is a consequence of Isaacson's formula for the effective stress-energy of gravitational waves [67,68,73]. Alternative theories of gravity, though, can predict different expressions for the stress-energy of gravitational waves and hence different relationships between H_a(f) and Ω_a(f) [67]. For ease of comparison to previous studies, we will use Eq. (4) to define the canonical energy-density Ω_a(f) in polarization a. If we allow Isaacson's formula to hold, then Ω_a(f) may be directly interpreted as a physical energy density. If not, though, then Ω_a(f) can instead be understood as a function of the observable H_a(f). We will choose to normalize the cross-correlation statistic Ĉ(f) such that ⟨Ĉ(f)⟩ = γ_T(f) Ω_T(f) + γ_V(f) Ω_V(f) + γ_S(f) Ω_S(f) ≡ γ_a(f) Ω_a(f). Its variance is then [28,68] σ²(f) ≈ (1/(2T df)) [10π² f³/(3H_0²)]² P_1(f) P_2(f). Here, T is the total coincident observation time between detectors, df is the frequency bin-width considered, and P_i(f) is the noise power spectral density of detector i. Note that the normalization of our cross-correlation measurement, with the overlap reduction functions appearing in ⟨Ĉ(f)⟩ rather than in σ²(f), differs from the convention normally adopted in the literature. Standard stochastic searches typically define a statistic Ŷ(f) ∝ s̃_1*(f) s̃_2(f)/γ_T(f), such that ⟨Ŷ(f)⟩ = Ω_T(f) in the presence of a pure tensor background [24,74,75]. Our choice of normalization, though, will prove more convenient when studying stochastic backgrounds of mixed gravitational-wave polarizations. To emphasize this distinction, we denote our cross-power estimators by Ĉ(f), rather than the more common Ŷ(f). A spectrum of cross-correlation measurements Ĉ(f) may be combined to obtain a single broadband signal-to-noise ratio (SNR), given by SNR = (Ĉ | γ_a Ω_M^a)/(γ_a Ω_M^a | γ_b Ω_M^b)^{1/2}, where we have defined the inner product (A | B) = Σ_f A*(f) B(f)/σ²(f), with the sum running over frequency bins. In Eq. (7), Ω_M^a(f) is our adopted model for the energy-density spectrum of the stochastic background. The expected SNR is maximized when this model is equal to the background's true energy-density spectrum. The resulting optimal SNR is given by ⟨SNR⟩_opt = (γ_a Ω^a | γ_b Ω^b)^{1/2} (see Appendix B for details). Conventionally, stochastic energy-density spectra are modeled as power laws, such that Ω_a(f) = Ω_0^a (f/f_0)^{α_a}, where Ω_0^a is the background's amplitude at a reference frequency f_0 and α_a is its spectral index (or slope) [24,68,75].
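As a numerical illustration of the broadband statistics above, the sketch below evaluates the per-bin variance, the frequency-domain inner product, the broadband SNR for a mixed-polarization power-law model, and the power-law spectrum itself. The overlap reduction functions, noise PSDs, and the Hubble-constant value are placeholders that would be supplied from the detector network, and the variance prefactor follows the expression reconstructed above, so it should be treated as indicative rather than definitive.

```python
import numpy as np

H0 = 2.2e-18  # Hubble constant in s^-1 (~68 km/s/Mpc); illustrative value

def sigma_squared(freqs, P1, P2, T_obs, df):
    """Per-bin variance of the cross-correlation statistic C(f), with the overlap
    reduction functions absorbed into the mean rather than the variance."""
    return (1.0 / (2.0 * T_obs * df)) * (10.0 * np.pi**2 * freqs**3 / (3.0 * H0**2))**2 * P1 * P2

def inner_product(A, B, var):
    """Discrete inner product (A|B) = sum_f A*(f) B(f) / sigma^2(f)."""
    return np.sum(np.conj(A) * B / var)

def model_snr(C_hat, gammas, omega_model, var):
    """Broadband SNR of the data against a (possibly mixed-polarization) model:
    SNR = (C | gamma_a Omega_M^a) / sqrt((gamma_a Omega_M^a | gamma_b Omega_M^b))."""
    template = sum(gammas[a] * omega_model[a] for a in omega_model)
    return np.real(inner_product(C_hat, template, var)) / np.sqrt(
        np.real(inner_product(template, template, var)))

def power_law(freqs, omega0, alpha, f0=25.0):
    """Omega_a(f) = Omega_0^a (f / f0)^alpha_a with reference frequency f0 = 25 Hz."""
    return omega0 * (freqs / f0) ** alpha
```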
The predicted tensor stochastic background from compact binary coalescences, for instance, is well modeled by a power law of slope α_T = 2/3 in the sensitivity band of Advanced LIGO [74]. For reference, slopes of α = 0 and α = 3 correspond to scale-invariant energy and strain spectra, respectively. While we will largely assume power-law models in our analysis, in Sect. VI we will explore the potential consequences if this assumption is in fact incorrect (as would be the case, for instance, for a background of unexpectedly massive binary black holes [74]). Throughout this paper we will use the reference frequency f_0 = 25 Hz. With the above formalism in hand, we can quantify Advanced LIGO's sensitivity to stochastic backgrounds of alternative polarizations. Plotted on the left side of Fig. 3 are power-law integrated (PI) curves representing Advanced LIGO's optimal sensitivity to power-law backgrounds of pure tensor (solid blue), vector (solid red), and scalar (solid green) modes [76]. The PI curves are defined such that a power-law spectrum drawn tangent to the PI curve will be marginally detectable, with ⟨SNR⟩_opt = 3, after three years of observation with design-sensitivity Advanced LIGO. In general, energy-density spectra lying above and below the PI curves are expected to have optimal SNRs greater and less than 3, respectively. On the right side of Fig. 3, meanwhile, the solid curves trace the power-law amplitudes required for marginal detection (⟨SNR⟩_opt = 3 after three years of observation) as a function of spectral index. Incidentally, the left and right-hand subplots of Fig. 3 are Legendre transforms of one another. For spectral indices α_a ≲ 0, Advanced LIGO is approximately equally sensitive to tensor and vector-polarized backgrounds, with reduced sensitivity to scalar signals. When α_a = 0, for instance, the minimum optimally-detectable tensor and vector amplitudes are Ω_0^T = 1.1 × 10⁻⁹ and Ω_0^V = 1.5 × 10⁻⁹, while the minimum detectable scalar amplitude is Ω_0^S = 4.4 × 10⁻⁹, a factor of three larger. This relative sensitivity is due to the fact that the tensor and vector overlap reduction functions are of comparable magnitude at low frequencies, while the scalar overlap reduction function is reduced in size (see Fig. 2). At high frequencies, on the other hand, Advanced LIGO's tensor overlap reduction function decays more rapidly than the vector and scalar overlap reduction functions. As a result, Advanced LIGO is more sensitive to vector and scalar backgrounds of large, positive slope than to tensor backgrounds of similar spectral shape. In Fig. 3a, for instance, the vector and scalar PI curves are seen to lie an order of magnitude below the tensor PI curve at frequencies above f ∼ 300 Hz. The constraints that Advanced LIGO can place on positively-sloped vector and scalar backgrounds are therefore as much as an order of magnitude more stringent than those that can be placed on tensor backgrounds of similar slope.
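The solid curves on the right side of Fig. 3 can be reproduced schematically as follows: because the optimal SNR is linear in the power-law amplitude, the marginally detectable amplitude at each slope is simply the SNR threshold divided by the optimal SNR of a unit-amplitude background. The function below is a sketch under that assumption; the overlap reduction function and variance arrays are placeholders supplied by the user.

```python
import numpy as np

def min_detectable_amplitude(freqs, gamma, var, alphas, snr_threshold=3.0, f0=25.0):
    """Marginally detectable power-law amplitude Omega_0 versus spectral index alpha.

    `gamma` is the overlap reduction function of the chosen polarization sector
    evaluated at `freqs`, and `var` is the per-bin variance sigma^2(f)."""
    omega0_min = []
    for alpha in alphas:
        unit = gamma * (freqs / f0) ** alpha          # gamma(f) * Omega(f) with Omega_0 = 1
        snr_unit = np.sqrt(np.sum(np.abs(unit) ** 2 / var))
        omega0_min.append(snr_threshold / snr_unit)
    return np.array(omega0_min)
```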
We emphasize that the Advanced LIGO network's relative sensitivities to tensor, vector, and scalar-polarized backgrounds are due purely to its geometry, rather than to properties of the backgrounds themselves. If we were instead to consider the Hanford-Virgo baseline, for instance, the right-hand side of Fig. 2 shows that at high frequencies the H1-V1 pair is least sensitive to scalar polarizations, whereas the H1-L1 baseline is least sensitive to tensor modes.

So far we have discussed only Advanced LIGO's optimal sensitivity to stochastic backgrounds of alternative polarizations. Existing stochastic searches, though, are not optimized for such backgrounds, instead using models Ω_a^M(f) that allow only for tensor gravitational-wave polarizations. The dashed curves in Fig. 3 illustrate Advanced LIGO's "naive" sensitivity to backgrounds of alternative polarizations when incorrectly assuming a purely-tensor model. Note that the "naive" curves on the right side of Fig. 3 are not smooth, with sharp kinks at α_a ∼ 2; more on this below.

The loss in sensitivity between the optimal and naive searches varies greatly with different spectral indices. Sensitivity loss is relatively minimal for slopes α_a ≲ 0. When α_S = 0, for example, the minimum detectable scalar amplitude rises from Ω_0^S = 4.4 × 10^−9 in the optimal case to 5.3 × 10^−9 in the naive case, an increase of 20%. Thus, a flat scalar background that is optimally detectable by Advanced LIGO may still be detected using existing techniques tailored to tensor polarizations. The SNR penalty is more severe for stochastic backgrounds of moderate positive slope. For α_S = 2, Advanced LIGO can optimally detect a scalar background of amplitude Ω_0^S = 1.3 × 10^−9, while existing methods would detect only a background of amplitude Ω_0^S = 4.4 × 10^−9, a factor of 3.4 larger.

Since the SNR of the stochastic search accumulates only as SNR ∝ √T, even a small decrease in sensitivity can result in a severe increase in the time required to make a detection. To illustrate this, Fig. 4 shows the ratio T_Naive/T_Optimal between the observing times required for Advanced LIGO to detect vector (red) and scalar (green) backgrounds using existing "naive" methods and optimal methods. Although we noted above that existing methods incur little sensitivity loss to flat scalar backgrounds, the detection of such backgrounds would nevertheless require at least 50% more observing time with existing searches. Since the stochastic background is expected to be optimally detected only after several years, even a 50% increase potentially translates into years of additional observation time, a requirement that may strain detector lifetimes and operational funding cycles. Naive detection of a scalar background with α_S = 2, for comparison, would require nearly twelve times the observing time.

Figs. 3 and 4 both show conspicuous kinks occurring at α_S ≈ 1.75 and α_V ≈ 2.5. These features are due to severe systematic parameter biases incurred when recovering vector and scalar backgrounds with a purely tensorial model. For vector and scalar backgrounds with α_a ≳ 3, the best-fit slope α_T (which maximizes the recovered SNR) is biased towards large values. Meanwhile, vector and scalar backgrounds with α_a ≲ 1 bias α_T in the opposite direction, towards smaller values. The sharp kinks in Figs. 3 and 4 occur at the transition between these two regimes. Such biases indicate another pitfall of existing search methods designed only for tensor polarizations.
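A sketch of how the naive-to-optimal comparison of Fig. 4 can be reproduced under the toy assumptions used above: the injected scalar background is recovered with a tensor-only power-law template whose amplitude and slope are optimized, and the observing-time penalty follows from SNR ∝ √T. The overlap reduction functions and noise curve are placeholders.

```python
import numpy as np

H0, F_REF = 2.2e-18, 25.0

def inner(A, B, freqs, P1, P2, T_obs):
    df = freqs[1] - freqs[0]
    norm = (3.0 * H0**2 / (10.0 * np.pi**2)) ** 2
    return 2.0 * T_obs * norm * np.sum(A * B / (freqs**6 * P1 * P2)) * df

def naive_vs_optimal(freqs, g_T, g_S, P1, P2, T_obs, omega_S0, alpha_S):
    """Expected SNR of a scalar background recovered optimally vs. with a
    tensor-only power-law template (template slope optimized by brute force)."""
    signal = g_S * omega_S0 * (freqs / F_REF) ** alpha_S
    snr_opt = np.sqrt(inner(signal, signal, freqs, P1, P2, T_obs))
    snr_naive = 0.0
    for alpha_T in np.linspace(-8, 8, 161):            # template slope grid
        template = g_T * (freqs / F_REF) ** alpha_T    # unit-amplitude tensor template
        # the best-fit amplitude yields matched SNR = (signal|template)/sqrt(template|template)
        snr = inner(signal, template, freqs, P1, P2, T_obs) / np.sqrt(
            inner(template, template, freqs, P1, P2, T_obs))
        snr_naive = max(snr_naive, snr)
    return snr_opt, snr_naive, (snr_opt / snr_naive) ** 2   # last entry ~ T_Naive / T_Optimal

freqs = np.arange(20.0, 500.0, 0.25)
g_T = np.exp(-freqs / 250.0)                # toy tensor overlap reduction function
g_S = 0.4 * np.exp(-freqs / 400.0)          # toy scalar overlap reduction function
P1 = P2 = 1e-47 * (1.0 + (freqs / 50.0) ** -4 + (freqs / 300.0) ** 2)
T_obs = 3 * 365.25 * 86400.0

print(naive_vs_optimal(freqs, g_T, g_S, P1, P2, T_obs, 4.4e-9, 0.0))
```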
Even if a vector or scalar-polarized background is recovered with minimal SNR loss, without some independent confirmation we may remain entirely unaware that the detected background indeed violates general relativity (see Sect. IV below). Furthermore, we would suffer from severe "stealth bias," unknowingly recovering heavily-biased estimates of the amplitude and spectral index of the stochastic background [77,78].

IV. IDENTIFYING ALTERNATIVE POLARIZATIONS

We have seen in Sect. III that, even when using existing methods assuming only standard tensor polarizations, Advanced LIGO may still be capable of detecting a stochastic background of vector or scalar modes (albeit after potentially much longer observation times). Detection is only the first of two hurdles, though. Once the stochastic background has been detected, we will still need to establish whether it is entirely tensor-polarized, or if it contains vector or scalar-polarized gravitational waves.

Since tensor, vector, and scalar gravitational-wave polarizations each enter into cross-correlation measurements [Eq. (2)] with unique overlap reduction functions, the polarization content of a detected stochastic background is in principle discernible from the spectral shape of Ĉ(f). As an example, Fig. 5 shows simulated cross-correlation measurements Ĉ(f) for both purely tensor (blue) and purely scalar-polarized (green) backgrounds after three years of observation with design-sensitivity Advanced LIGO. The left-hand side shows simulated measurements of extremely strong backgrounds, with spectra Ω_T(f) = 5 × 10^−8 (f/f₀)^(2/3) and Ω_S(f) = 1.8 × 10^−7 (f/f₀)^(2/3); amplitudes are chosen such that each background has expected SNR_OPT = 150 after three years of observation. The dashed curves trace the expectation values ⟨Ĉ(f)⟩ of the cross-correlation spectra for each case, while the solid curves show a particular instantiation of measured values. The alternating signs (positive or negative) of each spectrum are determined by the tensor and scalar overlap reduction functions, which have zero-crossings at different characteristic frequencies (see Fig. 2). As a result, tensor and scalar-polarized signals each impart a unique shape to the cross-correlation spectra, offering a means of discriminating between the two cases.

As mentioned above, though, the backgrounds shown on the left side of Fig. 5 are unphysically loud, with SNR_OPT = 152 and 148 for the simulated tensor and scalar backgrounds, respectively. A tensor background of this amplitude would have been detectable with the standard isotropic search over Advanced LIGO's O1 observing run [24]. Since stochastic searches accumulate SNR over time, the first detection of the stochastic background will necessarily be marginal; in this case the presence of alternative gravitational-wave polarizations would not be clear. To demonstrate this, the right side of Fig. 5 shows the simulated recovery of weaker tensor and scalar backgrounds of spectral shape Ω_T(f) = 1.7 × 10^−9 (f/f₀)^(2/3) and Ω_S(f) = 6.1 × 10^−9 (f/f₀)^(2/3), again after three years of observation with Advanced LIGO. These amplitudes correspond to expected SNR_OPT = 5 after three years. While Advanced LIGO would still make a very confident detection of each background, with SNR_OPT = 6.7 and 7.8 for the simulated tensor and scalar cases, the backgrounds' polarization content is no longer obvious.
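A minimal sketch of the kind of simulation shown in Fig. 5, assuming the normalization and variance of Eqs. (5)-(6) as reconstructed above: the mean cross-correlation spectrum is γ_a(f)Ω_a(f) and a single Gaussian realization is drawn around it. The overlap reduction function, noise curve, and random seed are placeholder assumptions.

```python
import numpy as np

H0, F_REF = 2.2e-18, 25.0
rng = np.random.default_rng(0)

def simulate_cross_spectrum(freqs, gammas, omegas, P1, P2, T_obs):
    """One realization of C_hat(f): mean sum_a gamma_a(f) Omega_a(f) [Eq. (5)],
    standard deviation sigma(f) following Eq. (6)."""
    df = freqs[1] - freqs[0]
    mean = sum(gammas[a] * omegas[a] for a in omegas)
    sigma = (np.sqrt(1.0 / (2.0 * T_obs * df))
             * (10.0 * np.pi**2 / (3.0 * H0**2)) * freqs**3 * np.sqrt(P1 * P2))
    return mean + sigma * rng.standard_normal(freqs.size), sigma

# hypothetical inputs: a toy scalar overlap reduction function with zero-crossings
freqs = np.arange(20.0, 500.0, 0.25)
g = {"S": 0.4 * np.cos(freqs / 60.0) * np.exp(-freqs / 400.0)}
omega = {"S": 1.8e-7 * (freqs / F_REF) ** (2.0 / 3.0)}
P1 = P2 = 1e-47 * (1.0 + (freqs / 50.0) ** -4 + (freqs / 300.0) ** 2)
T_obs = 3 * 365.25 * 86400.0

c_hat, sigma = simulate_cross_spectrum(freqs, g, omega, P1, P2, T_obs)
```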
Interestingly, even when naively searching for purely tensor-polarized backgrounds, design-sensitivity Advanced LIGO would still detect the "quiet" scalar example (on the right side of Fig. 5) with SNR = 5.0. When assuming a priori that the stochastic background is purely tensor-polarized, any vector or scalar components detected with existing techniques may therefore be mistaken for ordinary tensor modes. Not only would vector or scalar components fail to be identified, but, as discussed in Sect. III, they would heavily bias parameter estimation of the tensor energy-density spectrum. If we wish to test general relativity with the stochastic background, we will therefore need to develop new tools to formally quantify the presence (or absence) of vector or scalar polarizations. Additionally, while we have so far investigated only backgrounds of pure tensor, vector, or scalar polarization, most plausible alternative theories of gravity will predict backgrounds of mixed polarization, with vector or scalar components in addition to a tensor component. Any realistic approach must therefore be able to handle a stochastic background of completely generic polarization content.

Our approach will be to detect and classify the stochastic background using Bayesian model selection, adapting the method used in Ref. [32] to study the polarization content of continuous gravitational-wave sources. First, we will define an odds ratio O^SIG_N between signal (SIG) and noise (N) hypotheses to determine if a stochastic background (of any polarization) has been observed. Once a background is detected, we then construct a second odds ratio O^NGR_GR to determine if the background contains only tensor polarization (the GR hypothesis) or if there is evidence of alternative polarizations (the NGR hypothesis). We describe the definition and construction of O^SIG_N and O^NGR_GR in Appendix C. Unlike existing detection methods that assume a pure tensor background, our scheme allows for the detection of generically-polarized stochastic backgrounds. It encapsulates the optimal detection of tensor, vector, and scalar polarizations as described in Sect. III, and moreover enables the detection of more complex backgrounds of mixed polarization.

To compute the odds ratios O^SIG_N and O^NGR_GR, we use the PyMultiNest package [79], which implements a Python wrapper for the nested sampling software MultiNest [80-82]. MultiNest, an implementation of the nested sampling algorithm [83,84], is designed to efficiently evaluate Bayesian evidences [see Eq. (C1)] in high-dimensional parameter spaces, even in the case of large and possibly-curving parameter degeneracies. At little additional computational cost, MultiNest also returns posterior probabilities for each model parameter, allowing for parameter estimation in addition to model selection. Details associated with running MultiNest are given in Appendix D.
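A minimal sketch of how one of the sub-hypothesis evidences (here the tensor-only T hypothesis) might be computed with PyMultiNest, given a measured spectrum c_hat with uncertainty sigma and a tabulated gamma_T(f). The data files, output paths, and the slope-prior bound ALPHA_MAX are placeholder assumptions, and the keyword names follow the commonly documented PyMultiNest interface rather than the authors' actual code.

```python
import numpy as np
import pymultinest

# placeholder data: frequency grid, measured cross-spectrum, its sigma, and gamma_T(f)
freqs = np.load("freqs.npy"); c_hat = np.load("c_hat.npy")
sigma = np.load("sigma.npy"); gamma_T = np.load("gamma_T.npy")
F_REF, LOG_OMEGA_MIN, LOG_OMEGA_MAX, ALPHA_MAX = 25.0, -13.0, -6.0, 8.0   # ALPHA_MAX is a guess

def prior(cube, ndim, nparams):
    """Map the unit cube to (log10 Omega_T0, alpha_T): log-uniform amplitude,
    triangular slope centered on zero (see Appendix C)."""
    cube[0] = LOG_OMEGA_MIN + (LOG_OMEGA_MAX - LOG_OMEGA_MIN) * cube[0]
    u = cube[1]   # inverse CDF of a symmetric triangular distribution on [-ALPHA_MAX, ALPHA_MAX]
    cube[1] = ALPHA_MAX * (np.sqrt(2 * u) - 1) if u < 0.5 else ALPHA_MAX * (1 - np.sqrt(2 * (1 - u)))

def loglike(cube, ndim, nparams):
    """Gaussian likelihood of Eq. (C4) for a tensor-only power-law model."""
    model = gamma_T * 10.0 ** cube[0] * (freqs / F_REF) ** cube[1]
    return float(-0.5 * np.sum((c_hat - model) ** 2 / sigma ** 2))

pymultinest.run(loglike, prior, 2,
                outputfiles_basename="chains/T_",
                n_live_points=2000, sampling_efficiency=0.3,
                importance_nested_sampling=False, resume=False, verbose=False)

analyzer = pymultinest.Analyzer(n_params=2, outputfiles_basename="chains/T_")
ln_Z_T = analyzer.get_stats()["global evidence"]   # feeds into the odds ratios of Appendix C
```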
Our approach fundamentally differs from the strategy proposed by Nishizawa et al. in Refs. [28-30]. Nishizawa et al. endeavor to separate and measure the background's tensor, vector, and scalar content within each frequency bin. To solve for these three unknowns, three pairs of gravitational-wave detectors are required to break the degeneracy between polarizations. A nice feature of this method is that it allows for the separation of polarization modes without the need for a parametrized model of the background's energy-density spectrum. However, it has several drawbacks. First, the Nishizawa et al. component separation scheme requires at least three detectors. Even then, this method is not very sensitive; covariances between polarization modes mean that only very loud backgrounds can be separated and independently detected with reasonable confidence. Finally, Nishizawa et al. are largely concerned with the detection of a background, not the characterization of its spectral shape. Ref. [30] does discuss parameter estimation on the stochastic background using a Fisher matrix formalism, but there are well-known problems with this approach [85].

Our method is more aggressive. Rather than attempting to resolve the relative polarization content within each frequency bin, we assume a power-law model for the energy density in each polarization mode (see Appendix C). This allows us to confidently detect far weaker signals than the Nishizawa et al. approach. While this approach is potentially susceptible to bias if our model poorly fits the true background, the power law is a reasonable model for astrophysically plausible scenarios. Even if the true background differs significantly from this model, we find in Sect. VI that the potential bias is negligible. Another advantage of our method is that it can be used with only two detectors and hence can be applied today, rather than waiting for the construction of future gravitational-wave detectors. Finally, in Sect. V, we show that our Bayesian approach allows for full parameter estimation on the stochastic background, which properly takes into account the full degeneracies between background parameters (something a Fisher matrix analysis cannot do).

A. Backgrounds of Single Polarizations

As a first demonstration of this machinery, we explore the simple cases of purely tensor, vector, or scalar-polarized stochastic backgrounds. Shown in Fig. 6 are distributions of odds ratios O^SIG_N and O^NGR_GR obtained for simulated observations of both tensor and scalar backgrounds, each of slope α = 2/3 (the characteristic slope of a tensor binary black hole background). For each polarization, we consider two choices of amplitude, corresponding to SNR_OPT = 5 and 10 after three years of observation with design-sensitivity Advanced LIGO. For comparison, the hatched grey distributions show odds ratios obtained in the presence of pure Gaussian noise.

As seen in the left-hand side of Fig. 6, Gaussian noise yields a narrow odds ratio distribution centered at ln O^SIG_N ≈ −1.0. In contrast, the simulated observations of tensor and scalar backgrounds yield large, positive odds ratios, well-separated from Gaussian noise. Note that the tensor and scalar distributions lie nearly on top of one another, as O^SIG_N depends primarily on the optimal SNR of a background and not on its polarization content.

The right-hand side of Fig. 6, in turn, shows the odds ratios O^NGR_GR quantifying the evidence for alternative polarization modes. In the case of pure Gaussian noise, we again see a narrow distribution of odds ratios, centered at ln O^NGR_GR ≈ −0.4. In the absence of informative data, our analysis thus slightly favors the GR hypothesis. This can be understood as a consequence of the implicit Bayesian "Occam's factor," which penalizes the more complex NGR hypothesis relative to the simpler GR hypothesis. Simulated observations of scalar backgrounds, in turn, yield large positive values of ln O^NGR_GR, correctly favoring the NGR hypothesis. In contrast, pure tensor backgrounds yield negative ln O^NGR_GR.
Interestingly, the recovered odds ratios do not grow increasingly negative with larger tensor amplitudes, but instead saturate at ln O^NGR_GR ≈ −1.4. This reflects the fact that a non-detection of vector or scalar polarizations can never strictly rule out their presence, but only place an upper limit on their amplitudes. In other words, a strong detection of a pure tensor stochastic background cannot provide evidence for the GR hypothesis, but at best only offers no evidence against it. This behavior is in part due to our choice of amplitude priors, which allow for finite but immeasurably small vector and scalar energy densities (see Appendix C). As seen earlier in Fig. 6, ln O^NGR_GR saturates at −1.4 for loud tensor backgrounds. In the case of vector and scalar backgrounds, on the other hand, ln O^NGR_GR grows quadratically with increasing amplitude. In particular, ln O^NGR_GR is proportional to the squared SNR of the residuals between the observed Ĉ(f) and the best-fit tensor model. We begin to see a strong preference for the NGR hypothesis when these residuals become statistically significant.

B. Backgrounds of Mixed Polarization

So far we have considered only cases of pure tensor, vector, or scalar polarization. Plausible alternative theories of gravity, however, would typically predict a mixed background of multiple polarization modes. How does our Bayesian machinery handle a background of mixed polarization? To answer this question, we will investigate backgrounds of mixed tensor and scalar polarization. Figure 8 shows values of O^SIG_N and O^NGR_GR (left and right-hand sides, respectively) as a function of the amplitude of each polarization. While we allow the amplitudes to vary, we fix the tensor and scalar slopes to α_T = 2/3 (as predicted for binary black hole backgrounds) and α_S = 0. In the left side of Fig. 8, the recovered values of ln O^SIG_N simply trace contours of total energy. Thus the detectability of a mixed background depends only on its total measured energy, rather than its polarization content. Meanwhile, three distinct regions are observed in the right-hand subplot, delineated by the relative sizes of the tensor and scalar amplitudes.

From the odds ratio alone we cannot infer which specific polarizations - vector and/or scalar - are present in the background. While we found above that Advanced LIGO can identify mixed tensor-scalar backgrounds as non-tensorial when log Ω_0^S ≳ −8, this does not imply that we can successfully identify the scalar component as such, only that our measurements are not consistent with tensor polarization alone (see Sect. V).

The future addition of new gravitational-wave detectors will extend the reach of stochastic searches and help to break degeneracies between backgrounds of different polarizations. This expansion recently began with the completion of Advanced Virgo, which joined Advanced LIGO during its O2 observing run in August 2017 [2,7]. It is therefore interesting to investigate how the introduction of Advanced Virgo will improve the above results. Given detectors indexed by i ∈ {1, 2, ...}, the total SNR of a stochastic background is the quadrature sum of SNRs from each detector pair [68], SNR = (Σ_(i<j) SNR_ij²)^(1/2), where each SNR_ij is computed following Eq. (7). Naively, the SNR with which a background is observed is expected to increase as SNR ∝ √N, where N is the total number of available detector pairs (three in the case of the Advanced LIGO-Virgo network).
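A small sketch of the multi-baseline combination just described, with made-up per-baseline SNR values standing in for the HL, HV, and LV pairs:

```python
import numpy as np

def network_snr(per_baseline_snrs):
    """Quadrature sum of single-baseline SNRs (e.g. HL, HV, LV)."""
    return float(np.sqrt(np.sum(np.square(per_baseline_snrs))))

# hypothetical values: the widely separated HV and LV pairs contribute little
print(network_snr([5.0, 0.8, 0.9]))   # ~5.1, dominated by the HL baseline
```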
However, both the Hanford-Virgo and Livingston-Virgo pairs exhibit reduced sensitivity to the stochastic background due to their large physical separations. This fact is reflected in their respective overlap reduction functions, which are several times smaller in magnitude than the Hanford-Livingston overlap reduction functions (see Fig. 2).

Given three independent detector pairs (and hence three independent measurements at each frequency), one can in principle directly solve for the unknown tensor, vector, and scalar contributions to the background in each frequency bin [16, 28-30]. This component separation scheme can be performed without resorting to a model for the stochastic energy-density spectrum. However, frequency-by-frequency component separation is unlikely to be successful using the LIGO-Virgo network, due to the large uncertainties in the measured background at each frequency. Instead, when considering joint Advanced LIGO-Virgo observations we will again apply the Bayesian framework introduced above, leveraging measurements made at many frequencies in order to constrain the power-law amplitude and slope of each polarization mode.

To quantify the extent to which Advanced Virgo aids in the detection of the stochastic background, we again consider simulated observations of a mixed tensor (slope α_T = 2/3) and scalar (slope α_S = 0) background, this time with a three-detector Advanced LIGO-Virgo network. Our Bayesian formalism is easily extended to accommodate the case of multiple detector pairs; details are given in Appendix C. The odds ratios obtained from our simulated Advanced LIGO-Virgo observations are shown in Fig. 9 for various tensor and scalar amplitudes. The inclusion of Advanced Virgo yields no clear improvement over the Advanced LIGO results in Fig. 8. Due to its large distance from LIGO, Advanced Virgo does not contribute more than a small fraction of the total observed SNR. As a result, the combined Hanford-Livingston-Virgo network both detects (as indicated with O^SIG_N) and identifies (via O^NGR_GR) the scalar background component with virtually the same sensitivity as the Hanford-Livingston network alone.

V. PARAMETER ESTIMATION ON MIXED BACKGROUNDS

Parameter estimation will be the final step in a search for a stochastic background of generic polarization. If a gravitational-wave background is detected (as inferred from O^SIG_N), how well can Advanced LIGO constrain the properties of the background? Alternatively, if no detection is made, what upper limits can Advanced LIGO place on the background amplitudes of each polarization mode? We investigate these questions through three case studies: an observation of pure Gaussian noise, a standard tensor stochastic background, and a background of mixed tensor and scalar polarizations. The simulated background parameters used for each case are listed in Table I.

When performing model selection above, the odds ratios O^SIG_N and O^NGR_GR were constructed by independently allowing for each combination of tensor, vector, and scalar modes (see Appendix C). Parameter estimation, meanwhile, must be performed in the context of a specific background model. For the case studies below, we will adopt the broadest possible hypothesis, allowing for all three polarization modes (the TVS hypothesis in Appendix C). This choice will allow us to place simultaneous constraints on the presence of tensor, vector, and scalar polarizations in the stochastic background.
Parameter estimation is achieved using MultiNest, which returns samples drawn from the measured posterior distributions. There are several key subtleties that must be understood when interpreting the parameter estimation results presented below. First, whereas standard tensor upper limits are conventionally defined with respect to a single, fixed slope [24,75], we will quote amplitude limits obtained after marginalization over spectral index. This approach concisely combines information from the entire posterior parameter space to offer a single limit on each polarization considered. As a result, however, our simulated upper limits presented here should not be directly compared to those from standard searches for tensor backgrounds. Secondly, parameter estimation results are contingent upon the choice of a specific model. While we will demonstrate parameter estimation results under our TVS hypothesis (see Appendix C), other hypotheses may be better suited to answering other experimental questions. For example, if we were specifically interested in constraining scalar-tensor theories (which a priori do not allow vector polarizations), we would instead perform parameter estimation under the TS hypothesis. And if our goal was to perform a standard stochastic search for a purely tensor-polarized background, we would restrict to the T hypothesis. Although these various hypotheses all contain an analogous parameter Ω_0^T, the resulting upper limits on Ω_0^T will generically be different in each case. In short, different experimental questions will yield different answers.

Case 1: Gaussian Noise

First, we consider the case of pure noise, producing a simulated three-year observation of Gaussian noise at Advanced LIGO's design sensitivity. The resulting TVS posteriors are shown in Fig. 10. The colored histograms along the diagonal show the marginalized 1D posteriors for the amplitudes and slopes of the tensor, vector, and scalar components (blue, green, and red, respectively). The priors placed on each parameter are indicated with a dashed grey curve. Above each posterior we quote the median posterior value as well as ±34% credible limits. The remaining subplots illustrate the joint 2D posteriors between each pair of parameters.

For this simulated Advanced LIGO observation, we obtain log O^SIG_N = −1.1, consistent with a null detection. Accordingly, the posteriors on Ω_0^T, Ω_0^V, and Ω_0^S are each consistent with the lower bound of our amplitude prior (at log Ω_Min = −13). Meanwhile, the posteriors on spectral indices α_T, α_V, and α_S simply recover our chosen prior. The 95% credible upper limits on each amplitude are log Ω_0^T < −9.8, log Ω_0^V < −9.7, and log Ω_0^S < −9.3.

In Fig. 11 we show the posteriors obtained if we additionally include design-sensitivity Advanced Virgo (incorporating simulated measurements for the HV and LV detector pairs). For reference, the grey histograms show the posteriors from Fig. 10 obtained by Advanced LIGO alone. The Advanced LIGO-Virgo posteriors are virtually identical to those obtained from Advanced LIGO alone, with 95% credible upper limits of log Ω_0^T < −9.9, log Ω_0^V < −9.6, and log Ω_0^S < −9.4. In the case of a null-detection, then, the inclusion of Advanced Virgo does not notably improve the upper limits placed on the amplitudes of tensor, vector, and scalar backgrounds.
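A sketch of how marginalized upper limits like those quoted above can be extracted from equally weighted nested-sampling posterior samples. The file name and column ordering are assumptions about the output layout, not a description of the authors' actual post-processing.

```python
import numpy as np

# equally weighted posterior samples, here assumed to have columns
# [log10 Omega_T0, alpha_T, log10 Omega_V0, alpha_V, log10 Omega_S0, alpha_S]
samples = np.loadtxt("chains/TVS_post_equal_weights.dat")[:, :6]

def upper_limit(log_amp_samples, credibility=0.95):
    """Credible upper limit on a log-amplitude, marginalized over all other
    parameters (including the spectral index)."""
    return np.percentile(log_amp_samples, 100.0 * credibility)

for label, col in (("log Omega_T0", 0), ("log Omega_V0", 2), ("log Omega_S0", 4)):
    print(label, "<", round(upper_limit(samples[:, col]), 2), "(95% credibility)")
```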
FIG. 13. Once again, the grey histograms show parameter estimation results from Advanced LIGO alone.

Although Virgo does not improve our confidence in the detection, it can serve to break degeneracies present between different polarization modes. We begin to see this behavior in Fig. 13: with the inclusion of Advanced Virgo, we obtain a marginally tighter 68% credible interval of −8.9 ≤ log Ω_0^T ≤ −8.7 on the tensor amplitude, and slightly improved upper limits of log Ω_0^V < −9.3 and log Ω_0^S < −9.2 on vector and scalar amplitudes.

Case 3: Tensor and Scalar Backgrounds

As discussed above, most alternative theories of gravity would predict a stochastic background of mixed polarization. For our final case study, we therefore consider a mixed background with both tensor and scalar components, with the simulated amplitudes and slopes listed in Table I. The posteriors obtained for this data are shown in Fig. 14. Despite the strength of the simulated stochastic signal, we see that parameter estimation results are dominated by degeneracies between the different polarization modes. Although the tensor and scalar amplitude posteriors are locally peaked about their true values, much of the background's energy is misattributed to vector modes, illustrating that potentially severe degeneracies persist even at high SNRs. These degeneracies are exacerbated for backgrounds with small or negative spectral indices, as in the present case. Such backgrounds preferentially weight low frequencies, where the Advanced LIGO overlap reduction functions are all similar (see Fig. 2). This example serves to illustrate that, while Advanced LIGO can likely identify the presence of alternative polarizations through the odds ratio O^NGR_GR, Advanced LIGO alone is unable to determine which modes (vector or scalar) have been detected.

In contrast, the degeneracies in Fig. 14 are completely broken with the inclusion of Advanced Virgo. Whereas the Ω_0^V posterior is strongly peaked in Fig. 14, we see in Fig. 15 that the posterior is instead entirely consistent with our lower prior bound when including Advanced Virgo. The tensor and scalar amplitude posteriors, meanwhile, are each more strongly peaked about their correct values and are now inconsistent with the lower amplitude bound. Thus, while Advanced Virgo generally does not improve our ability to detect a stochastic background, we see that it can significantly improve prospects for simultaneous parameter estimation of multiple polarizations.

VI. BROKEN TENSOR SPECTRA

The stochastic search presented here offers a means to search for alternative gravitational-wave polarizations in a nearly model-independent way.
Unlike direct searches for compact binary coalescences, our search makes minimal assumptions about the source and nature of the stochastic background. We do, however, make one notable assumption: that the energy density spectra Ω_a(f) are well-described by power laws in the Advanced LIGO frequency band.

FIG. 16. Odds ratios O^NGR_GR obtained for simulated Advanced LIGO observations of tensor-polarized broken power law backgrounds with energy density spectra given by Eq. (13). The parameters α₁ and α₂ are the backgrounds' slopes below and above the "knee" frequency f_k, which we take to be 30 Hz (in the center of the stochastic sensitivity band). We scale the amplitude Ω₀ of each background such that it is optimally detectable with SNR_OPT = 5 after the simulated observation period. By design, these backgrounds are not well-described by single power laws, the form explicitly assumed in our search. Despite this fact, we find that these backgrounds are not systematically misclassified as containing vector or scalar polarization.

This is expected to be a reasonable approximation for most predicted astrophysical sources of gravitational waves. The backgrounds expected from stellar-mass binary black holes [74], core-collapse supernovae [38], and rotating neutron stars [86-88], for instance, are all well-modeled by power laws in the Advanced LIGO band. It may be, however, that the stochastic background is in fact not well-described by a single power law. This may be the case if, for instance, the background is dominated by high-mass binary black holes, an excess of systems at high redshift, or previously-unexpected sources of gravitational waves [74].

Given that our search allows only for power-law background models, how would we interpret a non-power-law background? In particular, if the stochastic background is purely tensorial (obeying general relativity) but is not well-described by a power law, would our search mistakenly claim evidence for alternative polarizations? To investigate this question, we consider simulated Advanced LIGO observations of pure tensor backgrounds described by broken power laws:

Ω_T(f) = Ω₀ (f/f_k)^(α₁) for f ≤ f_k, and Ω₀ (f/f_k)^(α₂) for f > f_k.   (13)

Here, Ω₀ is the background's amplitude at the "knee frequency" f_k, while α₁ and α₂ are the slopes below and above the knee frequency, respectively. We will set the knee frequency to f_k = 30 Hz, placing the backgrounds' knees in the most sensitive band of the stochastic search.

The odds ratios O^NGR_GR we obtain for these broken power laws are shown in Fig. 16 as a function of the two slopes α₁ and α₂. Each simulation assumes three years of observation at design sensitivity, and the amplitudes Ω₀ are scaled such that each background has expected SNR_OPT = 5 after this time. Any trends in Fig. 16 are therefore due to the backgrounds' spectral shapes rather than their amplitudes. If tensor broken power laws are indeed misclassified by our search, we should expect large, positive ln O^NGR_GR values in Fig. 16. Instead, we see that broken power laws are not systematically misclassified. When α₁ and α₂ are each positive, we recover ln O^NGR_GR ≈ −1.5, correctly classifying backgrounds as tensorial despite the fact that they are not described by power laws. When α₁ < 0, meanwhile, we recover odds ratios scattered about ln O^NGR_GR ≈ 0. This simply reflects the fact that when α₁ is negative the majority of a background's SNR is collected at low frequencies, where Advanced LIGO's tensor, vector, and scalar overlap reduction functions are degenerate. In such a case we do not show preference for either model over the other.
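A sketch of the broken-power-law injection described above: the spectrum of Eq. (13) is built for given slopes and its amplitude rescaled so that SNR_OPT = 5, using the same toy overlap reduction function and noise curve placeholders as in the earlier sketches.

```python
import numpy as np

H0, F_KNEE, SNR_TARGET = 2.2e-18, 30.0, 5.0

def broken_power_law(f, omega0, alpha1, alpha2, f_knee=F_KNEE):
    """Tensor energy-density spectrum of Eq. (13)."""
    return np.where(f <= f_knee,
                    omega0 * (f / f_knee) ** alpha1,
                    omega0 * (f / f_knee) ** alpha2)

def scaled_injection(alpha1, alpha2, freqs, gamma_T, P1, P2, T_obs):
    """Return the broken power law rescaled so that its optimal SNR equals SNR_TARGET."""
    df = freqs[1] - freqs[0]
    norm = (3.0 * H0**2 / (10.0 * np.pi**2)) ** 2
    shape = gamma_T * broken_power_law(freqs, 1.0, alpha1, alpha2)
    unit_snr = np.sqrt(2.0 * T_obs * norm * np.sum(shape**2 / (freqs**6 * P1 * P2)) * df)
    return (SNR_TARGET / unit_snr) * broken_power_law(freqs, 1.0, alpha1, alpha2)

freqs = np.arange(20.0, 500.0, 0.25)
gamma_T = np.exp(-freqs / 250.0)                       # toy gamma_T(f)
P1 = P2 = 1e-47 * (1.0 + (freqs / 50.0) ** -4 + (freqs / 300.0) ** 2)
T_obs = 3 * 365.25 * 86400.0

omega_inj = scaled_injection(alpha1=3.0, alpha2=-3.0, freqs=freqs,
                             gamma_T=gamma_T, P1=P1, P2=P2, T_obs=T_obs)
```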
Note that we find ln O^NGR_GR ≈ 0 even along the line α₂ = α₁ (for α₁ < 0), where the background is described by a single power law. We expect broken power laws to be most problematic when α₁ > 0 and α₂ < 0; in this case a background's SNR is dominated by a small frequency band around the knee itself. This would be the case if, for instance, the stochastic background were dominated by unexpectedly massive binary black hole mergers [74]. Figure 16 does suggest a larger scatter in log O^NGR_GR for such backgrounds. Even in this region, however, there is not a systematic bias towards larger values of O^NGR_GR, and the largest recovered odds ratios have log O^NGR_GR ≲ 2.5, well below the level required to confidently claim evidence for the presence of alternative polarizations.

Despite the fact that we assume purely power-law models for the stochastic energy-density spectra, our search appears reasonably robust against broken power law spectra that are otherwise purely tensor-polarized. In particular, in order to be mistakenly classified by our search, a tensor stochastic background would have to emulate the pattern of positive and negative cross-power associated with the vector and/or scalar overlap reduction functions (see, for instance, Fig. 5). This is simply not easy to do without a pathological background. While we have demonstrated this only for Advanced LIGO, we find similarly robust results for three-detector Advanced LIGO-Virgo observations. Nevertheless, when interpreting odds ratios O^NGR_GR it should be kept in mind that the true stochastic background may deviate from a power law. Even if a broken tensor background is not misclassified in our analysis, the parameter estimation results we obtain would likely be incorrect (another example of so-called "stealth bias"). It should be pointed out, though, that our analysis is not fundamentally restricted to power-law models. While we adopt power-law models here for computational simplicity, our analysis can be straightforwardly expanded in the future to include more complex models for the stochastic energy-density spectrum.

VII. DISCUSSION

The direct detection of gravitational waves by Advanced LIGO and Virgo has opened up new and unique prospects for testing general relativity. One such avenue is the search for vector and scalar gravitational-wave polarizations, predicted by some alternative theories of gravity but prohibited by general relativity. Observation of vector or scalar polarizations in the stochastic background would therefore represent a clear violation of general relativity. While the first preliminary measurements of the polarization of GW170814 have recently been made, our ability to study the polarization of transient gravitational-wave signals is currently limited by the number and orientation of current-generation detectors. In contrast, searches for long-duration sources like the stochastic background offer a promising means of directly measuring gravitational-wave polarizations with existing detectors.

In this paper, we explored a procedure by which Advanced LIGO can detect or constrain the presence of vector and scalar polarizations in the stochastic background. In Sect. III, we found that a stochastic background dominated by alternative polarization modes may be missed by current searches optimized only for tensor polarizations.
In particular, backgrounds of vector and scalar polarizations with large, positive slopes may take up to ten times as long to detect with current methods, relative to a search optimized for alternative polarizations. In Sect. IV, we therefore proposed a Bayesian method with which to detect a generically-polarized stochastic background. This method relies on the construction of two odds ratios (see Appendix C). The first serves to determine if a stochastic background has been detected, while the second quantifies evidence for the presence of alternative polarizations in the background. This search has the advantage of being entirely generic; it is capable of detecting and identifying stochastic backgrounds containing any combination of gravitational-wave polarizations. With this method, we demonstrated that flat scalar-polarized backgrounds of amplitude Ω_0^S ≈ 2 × 10^−8 can be confidently identified as non-tensorial with Advanced LIGO.

In Sect. V, we then considered the ability of Advanced LIGO to perform simultaneous parameter estimation on tensor, vector, and scalar components of the stochastic background. After three years of observation at design sensitivity, Advanced LIGO will be able to limit the amplitudes of tensor, vector, and scalar polarizations to Ω_0^T < 1.6 × 10^−10, Ω_0^V < 2.0 × 10^−10, and Ω_0^S < 5.0 × 10^−10, respectively, at 95% credibility. If, however, a stochastic background of mixed polarization is detected, Advanced LIGO alone cannot precisely determine the parameters of the tensor, vector, and/or scalar components simultaneously due to large degeneracies between modes.

We also considered how the addition of Advanced Virgo to the Hanford-Livingston network affects the search for alternative polarizations. In Sect. IV, we found that the addition of Advanced Virgo does not particularly increase our ability to detect or identify backgrounds of alternative polarizations. However, we found in Sect. V that Advanced Virgo does significantly improve our ability to perform parameter estimation on power-law backgrounds, breaking the degeneracies that plagued the Hanford-Livingston analysis.

Relative to other modeled searches for gravitational waves, the stochastic search described here has the advantage of being nearly model-independent. We have, however, made one large assumption: that the tensor, vector, and scalar energy-density spectra are well-described by power laws in the Advanced LIGO band. Finally, in Sect. VI we explored the implications of this assumption, asking the question: would tensor backgrounds not described by power laws be mistaken for alternative polarizations in our search? We found that our proposed Bayesian method is reasonably robust against this possibility. In particular, even pure tensor backgrounds with sharply-broken power law spectra are not systematically misidentified by our search.

The non-detection of alternative polarizations in the stochastic background may yield interesting experimental constraints on extended theories of gravity. Meanwhile, any experimental evidence for alternative polarizations in the stochastic background would be a remarkable step forward for experimental tests of gravity. Of course, if future stochastic searches do yield evidence for alternative polarizations, careful study would be required to verify that this result is not due to unmodeled effects like non-Gaussianity or anisotropy in the stochastic background [26, 89-92].
Comparison to polarization measurements of other long-lived sources like rotating neutron stars [31,32] will additionally aid in the interpretation of stochastic search results. Several future developments may further improve the ability of ground-based detectors to detect alternative polarization modes in the stochastic background. First, the continued expansion of the ground-based detector network will improve our ability to both resolve the stochastic background and accurately determine its polarization content. Secondly, while we presently assume that the stochastic background is Gaussian, the background contribution from binary black holes is expected to be highly non-Gaussian [25]. Future stochastic searches may therefore be aided by the development of novel data analysis techniques optimized for non-Gaussian backgrounds [90-92].

PHY-1204944 at the University of Minnesota. M.S. is partially supported by STFC (UK) under the research grant ST/L000326/1. E.T. is supported through ARC FT150100281 and CE170100004. This paper carries the LIGO Document Number ligo-p1700059 and King's College London report number KCL-PH-TH/2017-25.

A. OVERLAP REDUCTION FUNCTIONS

The sensitivity of a two-detector network to a stochastic gravitational-wave background is quantified by the overlap reduction function [68,69]

γ(f) ∝ Σ_A ∫ dΩ̂ F_1^A(Ω̂) F_2^A(Ω̂) e^(2πi f Ω̂·Δx/c),   (A1)

where Δx is the displacement vector between detectors, c is the speed of light, and F_1,2^A(Ω̂) are the antenna patterns describing the response of each detector to gravitational waves of polarization A propagating from the direction Ω̂. The overlap reduction function is effectively the sky-averaged product of the two detectors' antenna patterns, weighted by the additional phase accumulated as a gravitational wave propagates from one site to the other. In the standard stochastic search, the summation in Eq. (A1) is taken over the tensor plus and cross polarizations. When extending the stochastic search to generic gravitational-wave polarizations, we must now consider three separate overlap reduction functions for the tensor, vector, and scalar modes [28], obtained by restricting the polarization sum in Eq. (A1) to the corresponding modes:

γ_a(f) ∝ Σ_(A ∈ a) ∫ dΩ̂ F_1^A(Ω̂) F_2^A(Ω̂) e^(2πi f Ω̂·Δx/c),  a ∈ {T, V, S}.   (A2)

We normalize these functions such that γ_T(f) = 1 for coincident, co-aligned detectors; detectors that are rotated or separated relative to one another have γ_T(f) < 1. The amplitudes of γ_V(f) and γ_S(f), meanwhile, express relative sensitivities to vector and scalar backgrounds.

Note that the normalization of γ_S(f) differs from that of Nishizawa et al. in Ref. [28]. This difference is due to Nishizawa et al.'s definition of the longitudinal polarization tensor, which carries an additional factor of √2 relative to the more common

e^l = Ω̂ ⊗ Ω̂.   (A4)

(To distinguish between these two conventions, the quantities adopted by Nishizawa et al. will be underscored with tildes.) As a consequence, Nishizawa et al. obtain a longitudinal antenna pattern which differs by a factor of √2 from the conventional form. Correspondingly, the quantity Ω̃_l(f) defined by Nishizawa et al. is actually half of the canonical energy density in longitudinal gravitational waves: Ω̃_l(f) = Ω_l(f)/2.

While each overlap reduction function may be calculated numerically via Eq. (A2), they may also be analytically expanded in terms of spherical Bessel functions [28,68]. See Ref. [28] for definitions of the tensor, vector, and scalar overlap reduction functions in this analytic form. Note, however, that these definitions follow Nishizawa et al.'s normalization convention as discussed above; the analytic expression given for γ_S(f) must be divided by 3 to match our Eq. (A2).
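A sketch of the numerical sky integration in Eq. (A2), using a simple longitude-latitude grid and placeholder antenna-pattern functions. A real calculation would use the actual detector response tensors and site positions, and the overall normalization would be fixed afterwards as described in the text; everything specific here is an assumption for illustration.

```python
import numpy as np

C_LIGHT = 299792458.0

def overlap_reduction(freqs, antenna1, antenna2, delta_x, n_theta=60, n_phi=120):
    """Unnormalized overlap reduction function: sky integral of the product of
    the two detectors' antenna patterns, weighted by the inter-site phase factor."""
    theta = np.linspace(0.0, np.pi, n_theta)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    th, ph = np.meshgrid(theta, phi, indexing="ij")
    n_hat = np.stack([np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)], axis=-1)
    d_omega = (np.pi / n_theta) * (2.0 * np.pi / n_phi) * np.sin(th)   # solid-angle weights
    phase_arg = n_hat @ np.asarray(delta_x) / C_LIGHT                  # light travel time along n_hat
    gamma = np.empty(len(freqs), dtype=complex)
    for k, f in enumerate(freqs):
        integrand = antenna1(n_hat) * antenna2(n_hat) * np.exp(2j * np.pi * f * phase_arg)
        gamma[k] = np.sum(integrand * d_omega)
    return gamma.real   # normalize afterwards so gamma_T = 1 for coincident, co-aligned detectors

# placeholder antenna pattern (NOT the real LIGO response) and a ~3000 km separation
toy_F = lambda n: 0.5 * (n[..., 0] ** 2 - n[..., 1] ** 2)
freqs = np.linspace(10.0, 500.0, 50)
gamma_unnorm = overlap_reduction(freqs, toy_F, toy_F, delta_x=[3.0e6, 0.0, 0.0])
```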
B. OPTIMAL SIGNAL-TO-NOISE RATIO

Searches for the stochastic background rely on measurements Ĉ(f) of the cross-power between two detectors. As discussed in Sect. III, the expectation value and variance of Ĉ(f) are given by Eqs. (5) and (6), respectively. Here, we derive the optimal broadband signal-to-noise ratio [Eq. (9)], which combines a spectrum of cross-correlation measurements into a single detection statistic. Given a measured spectrum Ĉ(f) and associated variances σ²(f), a single broadband statistic may be formed via the weighted sum

Ĉ = Σ_f w(f) Ĉ(f),

where w(f) is a set of yet-undefined weights. The mean and variance of Ĉ are

⟨Ĉ⟩ = Σ_f w(f) γ_a(f) Ω_a(f)   and   σ² = Σ_f w²(f) σ²(f),

where γ_a(f)Ω_a(f) denotes the summation Σ_a γ_a(f)Ω_a(f) over polarization modes a ∈ {T, V, S}. We define a broadband signal-to-noise ratio by SNR = Ĉ/σ. In the limit df → 0, the expected value of this quantity may be written

⟨SNR⟩ = (w | γ_a Ω_a) / (w | w)^(1/2),

where we have substituted Eq. (6) for σ²(f) and made use of the inner product defined in Eq. (8). The expected SNR is maximized when the chosen weights are equal to the true background, such that w(f) = γ_a(f) Ω_a^GW(f). In this case, the optimal expected SNR of the stochastic background becomes SNR_OPT = (γ_a Ω_a | γ_a Ω_a)^(1/2), as quoted in Eq. (9).

C. CONSTRUCTION OF ODDS RATIOS

The Bayesian evidence for a hypothesis A with parameters θ_A is

Z_A = ∫ L(Ĉ|θ_A, A) π(θ_A|A) dθ_A.   (C1)

Here, the likelihood L(Ĉ|θ_A, A) gives the conditional probability of the measured data under hypothesis A for fixed parameter values, while π(θ_A|A) is the prior probability set on these parameters. When selecting between two such hypotheses A and B, we may define an odds ratio

O^A_B = (Z_A/Z_B) [π(A)/π(B)].   (C2)

The first factor in Eq. (C2), called the Bayes factor, is the ratio between the Bayesian evidences for hypotheses A and B. The second term, meanwhile, is the ratio between the prior probabilities π(A) and π(B) assigned to each hypothesis.

To construct odds ratios for our stochastic background analysis, we will first need the likelihood L({Ĉ}|θ, A) of a measured cross-power spectrum under model A with some parameters θ. In the presence of Gaussian noise, the likelihood of measuring a specific Ĉ(f) within a single frequency bin is [68,74,93]

L(Ĉ(f)|θ, A) ∝ exp[ −(Ĉ(f) − Σ_a γ_a(f) Ω_a^A(θ; f))² / (2σ²(f)) ],

with variance σ²(f) given by Eq. (6). Here, Ω_a^A(θ; f) is our model for the energy-density spectrum under hypothesis A and with parameters θ, evaluated at the given frequency f. The full likelihood L({Ĉ}|θ, A) for a spectrum of cross-correlation measurements is the product of the individual likelihoods in each frequency bin:

L({Ĉ}|θ, A) = N exp[ −(1/2) (Ĉ − γ_a Ω_a^A | Ĉ − γ_a Ω_a^A) ],   (C4)

where N is a normalization coefficient and we have used the inner product defined by Eq. (8).

As discussed in Sect. IV, we will seek to detect and characterize a generic stochastic background via the construction of two odds ratios: O^SIG_N, which indicates whether a background of any polarization is present, and O^NGR_GR, which quantifies evidence for the presence of alternative polarization modes.

First consider O^SIG_N. Under the noise hypothesis (N), we assume that no signal is present [such that Ω_a^N(f) = 0]. From Eq. (C4), the corresponding likelihood is simply L({Ĉ}|N) = N exp[ −(1/2) (Ĉ | Ĉ) ]. The signal hypothesis (SIG) is somewhat more complex. The signal hypothesis is ultimately the union of seven distinct sub-hypotheses that together describe all possible combinations of tensor, vector, and scalar polarizations [32,94]. To understand this, first define a "TVS" hypothesis that allows for the simultaneous presence of tensor, vector, and scalar polarization. In this case, we will model the stochastic energy-density spectrum as a sum of three power laws with free parameters Ω_0^a and α_a setting the amplitude and spectral index of each polarization sector. The priors on these parameters are given by Eqs. (C11) and (C12) below.
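A sketch of the TVS signal model and its Gaussian likelihood (the per-bin form above, multiplied across bins as in Eq. (C4)), written as plain functions that could be dropped into a nested-sampling evidence calculation. The overlap reduction functions, uncertainty curve, and data below are placeholders.

```python
import numpy as np

F_REF = 25.0
MODES = ("T", "V", "S")

def model_spectrum(freqs, params, gammas):
    """Sum over polarizations of gamma_a(f) * Omega_0^a (f/f0)^alpha_a, with
    params = (log10 Omega_0^T, alpha_T, log10 Omega_0^V, alpha_V, log10 Omega_0^S, alpha_S)."""
    model = np.zeros_like(freqs)
    for k, a in enumerate(MODES):
        log_amp, slope = params[2 * k], params[2 * k + 1]
        model += gammas[a] * 10.0 ** log_amp * (freqs / F_REF) ** slope
    return model

def log_likelihood(params, freqs, c_hat, sigma, gammas):
    """ln L({C_hat}|theta, TVS): Gaussian in each frequency bin, so the exponent is
    -(1/2) * sum_f [C_hat(f) - model(f)]^2 / sigma^2(f)."""
    residual = c_hat - model_spectrum(freqs, params, gammas)
    return float(-0.5 * np.sum(residual ** 2 / sigma ** 2))

# hypothetical inputs for a quick sanity check
freqs = np.arange(20.0, 500.0, 0.25)
gammas = {a: np.exp(-freqs / 250.0) * s for a, s in zip(MODES, (1.0, 0.8, 0.3))}  # toy curves
sigma = 1e-8 * (1.0 + (freqs / 200.0) ** 3)
c_hat = np.zeros_like(freqs)                       # pure-noise-like data with zero mean
print(log_likelihood((-9.0, 0.7, -13.0, 0.0, -13.0, 0.0), freqs, c_hat, sigma, gammas))
```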
In defining the TVS hypothesis, we have made the explicit assumption that tensor, vector, and scalar radiation are each present. This is not the only possibility, of course. A second distinct hypothesis, for instance, is that only tensor and vector polarizations exist. This is our "TV" hypothesis. We model the corresponding energy spectrum as

Ω_TV(f) = Ω_0^T (f/f₀)^(α_T) + Ω_0^V (f/f₀)^(α_V).

In a similar fashion, we must ultimately define seven such hypotheses, denoted TVS, TV, TS, VS, T, V, and S, to encompass all combinations of tensor, vector, and scalar gravitational-wave backgrounds. Our complete signal hypothesis is given by the union of these seven sub-hypotheses [32,94]. For each signal sub-hypothesis, we adopt the log-amplitude and slope priors given below in Eqs. (C11) and (C12).

Each of the signal sub-hypotheses is logically independent [32,94], and so the odds ratio O^SIG_N between signal and noise hypotheses is given by the sum of odds ratios between the noise hypothesis and each of the seven signal sub-hypotheses:

O^SIG_N = Σ_(A ∈ {TVS, TV, TS, VS, T, V, S}) O^A_N.

As illustrated in Fig. 17, we assign equal prior probability to the signal and noise hypotheses. Within the signal hypothesis, we weight each of the signal sub-hypotheses equally, such that the prior odds between e.g. the T and N hypotheses are π(T)/π(N) = 1/7. We note that our choice of prior probabilities is not unique; there may exist other valid choices as well. Our analysis can easily accommodate different choices of prior weight.

The odds ratio O^NGR_GR is constructed similarly. In this case, we are selecting between the hypothesis that the stochastic background is purely tensor-polarized (GR), and the hypothesis that additional polarization modes are present (NGR). The GR hypothesis is identical to our tensor-only hypothesis T from above. The NGR hypothesis, on the other hand, will be the union of the six signal sub-hypotheses that are inconsistent with general relativity: V, S, TV, TS, VS, and TVS. The complete odds ratio between the NGR and GR hypotheses is then

O^NGR_GR = Σ_(A ∈ {V, S, TV, TS, VS, TVS}) O^A_GR.

As shown in Fig. 17, we have assigned equal priors to the GR and NGR hypotheses, as well as identical priors to the six NGR sub-hypotheses.
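A sketch of how the two odds ratios might be assembled from the log-evidences of the individual sub-hypotheses (e.g. as returned by a nested sampler), using the equal prior weights described above. The dictionary values are made-up numbers for illustration only.

```python
import numpy as np

# hypothetical log-evidences ln Z_A for the noise model and the seven signal sub-hypotheses
ln_Z = {"N": -120.0, "T": -112.5, "V": -118.0, "S": -117.2,
        "TV": -113.0, "TS": -112.9, "VS": -116.8, "TVS": -113.4}

SIG_MODELS = ("TVS", "TV", "TS", "VS", "T", "V", "S")
NGR_MODELS = ("V", "S", "TV", "TS", "VS", "TVS")

def log_sum_exp(values):
    m = max(values)
    return m + np.log(sum(np.exp(v - m) for v in values))

# O^SIG_N = sum_A O^A_N, with prior odds pi(A)/pi(N) = 1/7 for each sub-hypothesis
ln_O_sig_noise = log_sum_exp([ln_Z[a] - np.log(7.0) for a in SIG_MODELS]) - ln_Z["N"]

# O^NGR_GR = sum_A O^A_GR, with prior odds pi(A)/pi(GR) = 1/6 and GR identified with T
ln_O_ngr_gr = log_sum_exp([ln_Z[a] - np.log(6.0) for a in NGR_MODELS]) - ln_Z["T"]

print(ln_O_sig_noise, ln_O_ngr_gr)
```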
In computing the odds ratios O^SIG_N and O^NGR_GR, we also need priors for the various parameters governing each model for the stochastic background. In the various energy-density models presented above, we have defined two classes of parameters: amplitudes Ω_0^a and spectral indices α_a of the background's various polarization components. For each amplitude parameter, we will use the prior

π(Ω₀) ∝ 1/Ω₀ for Ω_Min ≤ Ω₀ ≤ Ω_Max, and π(Ω₀) = 0 otherwise.   (C11)

This corresponds to a uniform prior in the log-amplitudes between log Ω_Min and log Ω_Max. In order for this prior to be normalizable, we cannot let it extend all the way to Ω_Min = 0 (log Ω_Min → −∞). Instead, we must choose a finite lower bound. While this lower bound is somewhat arbitrary, our results depend only weakly on the specific choice of bound [32]. In this paper, we take Ω_Min = 10^−13, an amplitude that is indistinguishable from noise with Advanced LIGO. Our upper bound, meanwhile, is Ω_Max = 10^−6, consistent with upper limits placed by Initial LIGO and Virgo [75].

We adopt a triangular prior on α, centered at zero,

π(α) ∝ 1 − |α|/α_Max for |α| ≤ α_Max, and π(α) = 0 otherwise.   (C12)

This prior has several desirable properties. First, it captures a natural tendency for spectral index posteriors to peak symmetrically about α = 0. As a result, our α posteriors reliably recover this prior in the absence of informative data (see Fig. 10, for example). Second, this prior preferentially weights shallower energy-density spectra. This quantifies our expectation that the stochastic background's energy density be distributed somewhat uniformly across logarithmic frequency intervals (at least in the LIGO band), rather than entirely at very high or very low frequencies. Alternatively, Eq. (C12) can be viewed as corresponding to equal priors on the background strength at two different frequencies. To understand this, first note that α may be written as a function of the background amplitudes Ω₀ and Ω₁ at two frequencies f₀ and f₁:

α = log(Ω₁/Ω₀) / log(f₁/f₀).

The prior probability of a particular slope α is equal to the probability of drawing any two amplitudes Ω₀ and Ω₁ satisfying log(Ω₁/Ω₀) = α log(f₁/f₀). This is given by the convolution

π(α) = ∫ π(log Ω₁) π(log Ω₀ = log Ω₁ − α log(f₁/f₀)) d log Ω₁.

In Sects. IV and V, we additionally considered the performance of the three-detector Advanced LIGO-Virgo network. The Bayesian framework considered here is easily extended to accommodate multiple detector pairs. The three LIGO and Virgo detectors allow for the measurement of three cross-correlation spectra: Ĉ_HL(f), Ĉ_HV(f), and Ĉ_LV(f). In the small-signal limit [Ω_a(f) ≪ 1], the correlations between these measurements vanish at leading order, and so the three baselines can be treated as statistically independent [68]. We can therefore factorize the joint likelihood for the three sets,

L({Ĉ_HL, Ĉ_HV, Ĉ_LV}|θ, A) = N exp[ −(1/2) Σ_(IJ ∈ {HL, HV, LV}) (Ĉ_IJ − γ_a^IJ Ω_a^A | Ĉ_IJ − γ_a^IJ Ω_a^A) ],

substituting likelihoods of the form (C4) for each pair of detectors. Note that we have explicitly distinguished between the overlap reduction functions for each baseline, and N is again a normalization constant. Other than the above change to the likelihood, all other details of the odds ratio construction are unchanged when including three detectors.

D. EVALUATING BAYESIAN EVIDENCES WITH MULTINEST

Here we summarize details associated with using MultiNest to evaluate Bayesian evidences for various models of the stochastic background. The MultiNest algorithm allows for several user-defined parameters, including the number n of live points used to sample the prior volume and the sampling efficiency ε, which governs the acceptance rate of newly proposed live points (see e.g. Ref. [81] for details). MultiNest also provides the option to run in Default or Importance Nested Sampling (INS) modes, each of which uses a different method to evaluate evidences [82].

To set the number of live points, we investigated the convergence of MultiNest's evidence estimates with increasing values of n. For a single simulated observation of a tensorial background (with amplitude Ω_0^T = 2 × 10^−8 and slope α_T = 2/3), for instance, Fig. 18 shows the recovered evidence for the T hypothesis (see Appendix C above) as a function of n, using both the Default (blue) and INS modes (green). The results are reasonably stable for n ≳ 1000; we choose n = 2000 live points. Meanwhile, our recovered evidence estimates do not exhibit noticeable dependence on the sampling efficiency; we choose the recommended values ε = 0.3 for evidence evaluation and ε = 0.8 for parameter estimation [81].

In addition to computing Bayesian evidences, MultiNest also returns an estimate of the numerical error associated with each evidence calculation. See, for instance, the error bars in Fig. 18. To gauge the accuracy of these error estimates, we construct a single simulated Advanced LIGO observation of a purely-tensorial stochastic background (again with Ω_0^T = 2 × 10^−8 and α_T = 2/3). We then use MultiNest to compute the corresponding TVS evidence 500 times, in both Default and INS modes.
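A small sketch of the comparison just described: given the 500 recovered log-evidences and the per-run error estimates reported by the sampler, the manually measured scatter can be checked against the mean reported error. The file names are placeholders for wherever those values are stored.

```python
import numpy as np

# hypothetical arrays: one entry per repeated MultiNest run on the same simulated data
ln_z = np.load("tvs_default_lnZ.npy")          # recovered log-evidences
ln_z_err = np.load("tvs_default_lnZ_err.npy")  # per-run error estimates reported by the sampler

manual_sigma = np.std(ln_z, ddof=1)            # scatter measured from the distribution itself
reported_sigma = np.mean(ln_z_err)             # average reported +/- 1 sigma interval
print(manual_sigma, reported_sigma, manual_sigma / reported_sigma)
```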
The resulting distributions of evidences are shown in Fig. 19. The dashed error bars show the averaged ±1σ intervals reported by MultiNest, while the solid bars show the ±1σ intervals obtained manually from the distributions. We see that the errors reported by MultiNest's Default mode appear to accurately reflect the numerical error in the evidence calculation, while the errors reported by the INS mode are underestimated by a factor of ∼2.

Additionally, Fig. 19 illustrates several systematic differences between the Default and INS results. First, Default mode appears significantly more precise than INS mode, giving rise to a much narrower distribution of evidences. Not only is the INS evidence distribution wider, but it exhibits a large tail extending several units in evidence above the mean. We find that similarly long tails also appear for other pairs of injected signals and recovered models. For this reason, we choose to use MultiNest's Default mode in all evidence calculations. Typical numerical errors in Default mode are of order δ(evidence) ∼ 0.1, and so the uncertainty associated with a log-odds ratio is δ(ln O) ∼ √2 δ(evidence), again of order 0.1. Additionally, we see that the peaks of the Default and INS distributions do not coincide. In general, the peaks of evidence distributions from the Default and INS modes lie ∼0.3 units apart. Thus there may be additional systematic uncertainties in a given evidence calculation. However, as long as we consistently use one mode or the other (in our case, Default mode), any uniform systematic offset in the evidences will simply cancel when we ultimately compute a log-odds ratio.
Mathematical modeling of quantitative changes in hydrogen and oxide inclusions in aluminum alloy. In this article, mathematical modeling of quantitative changes in hydrogen and oxide inclusions in aluminum alloys is justified, developed, and analytically implemented. The methods of linear algebra are mainly used; in particular, solutions of systems of inhomogeneous algebraic equations are obtained by the Gauss, Cramer, and inverse-matrix methods using the Maple 13 software package. Quantitative changes in hydrogen and oxide inclusions in the alloy are determined by a change in the average dispersion of the loaded flux. Functions relating the change in aluminum oxide content in the alloy (%) to the increase in temperature T (°C) during loading of the charge into the liquid bath are obtained, as are functions determining the change in the quantity of hydrogen (cm³/100 g) in the alloy depending on the holding time t (minutes) of the heated charge during the research period. Based on these functional dependencies, graphs of the changes in the main desired parameters are constructed, and numerical indexes are given in tabular form for engineering and applied calculations. In particular, the following were plotted: the change in the quantity of hydrogen and oxide inclusions in the alloy with an increase in the average dispersion d of the flux; the change in the quantity of hydrogen with an increase in temperature during loading of the charge into the liquid bath; the change in the quantity of aluminum oxide in the alloy (%) with an increase in temperature T (°C); and the patterns of change in the quantity of hydrogen in the alloy (cm³/100 g) and the quantity of oxide (%) depending on the holding time of the heated charge during the research period.

Introduction

One of the most important areas of engineering materials science is the production of new, effective, and promising alloys for foundry production. At the same time, for the further improvement of modern technologies for processing materials and parts for general mechanical engineering, a powerful lever for the development of theoretical foundations and innovative technologies is the correct justification and formulation of the problem of applying analytical research, in particular, the development and analytical realization of mathematical models.

At present, the development of resource- and energy-saving technologies in the smelting of aluminum alloys is of particular importance. Resource saving in melting aluminum alloys is especially important because of the heat-exchange processes in which the alloys become saturated with gas and non-metallic inclusions. Consequently, developing a protective flux that reduces gas and other non-metallic inclusions, optimizes the melting process, and improves the quality of castings obtained from aluminum alloys is one of the important tasks of our time. In this area, in many developed countries, such as the USA, Canada, Germany, France, Korea, Japan, Russia, Ukraine, and China, special attention is paid to reducing gas and other non-metallic inclusions in aluminum alloy casting. In general, numerous scientific works of foreign scientists and scientists from Uzbekistan are devoted to research, the creation of new technologies or the improvement of existing technologies for melting aluminum alloys, and the development of more effective compositions of protective fluxes used in the melting of aluminum alloys [1]-[3].
Here is a brief overview of research conducted at many universities and institutes worldwide. In particular, work has been carried out at the University of California (USA) and Jinan University (China; scientists Min Zuo, Maximilian Sokoluk, Chezheng Cao), by scientists from Canada (T.A. Utigard, R.R. Roy and K. Friesen), by a group of scientists from Great Britain and Italy (Annalisa Pola, Marialaura Tocci, Plato Kapranos), by scientists from Germany and Harbin University (Jean Ducrocq, Szunyan Chan), by R. Nabibullah (University of Pakistan), and by others [4], [5]. Scientists of the CIS countries S.P. Zadrutsky, G.A. Rumyantsev, B.M. Nemenenok, I.A. Gorbel (National Technical University of the Republic of Belarus), S.V. Voronin and P.S. Loboda (Samara National Research University named after Academician S.P. Korolev), V.A. Grachev (Penza State Technical University) and others conducted several research works to improve the mechanical properties of aluminum alloys obtained from gas melting aggregates [6], [8]. In the Republic of Uzbekistan, research is underway to improve the technologies for melting aluminum alloys and the composition of protective fluxes used in melting, which helps to improve the quality of the melt. In addition, research was carried out to increase the efficiency of the melting process and to use new protective materials and constructions to support these technologies. For this, it is necessary to raise the priority of ongoing research on developing an effective protective flux composition and improving the efficiency of flux use during the melting process, which is widely used in aluminum alloy production. Scientists (E.Kh. Tulyaganov, N.Dj. Turakhodjaev, T.Kh. Tursunov, and others) investigated structural changes in castings from aluminum alloys depending on the melting mode [9]-[11]. An analytical review of the world and domestic literature in the above areas showed that in many of the authors' studies insufficient attention is paid to the widespread use of mathematical modeling of the process under study.

Methods
In this article, the object of research is the mathematical modeling of quantitative changes in hydrogen and oxide inclusions in the composition of aluminum alloys. As is known, the basis of the method of mathematical modeling is algorithmization. It should be emphasized that the word "algorithm" comes from the name of the Central Asian scientist of the 9th century, Muhammad ibn Musa Al-Khorezmi (born in the Khorezm region of present-day Uzbekistan around 783-800, died about 850). Al-Khorezmi was a great mathematician, astronomer, geographer, and historian; thanks to Al-Khorezmi, the terms "algorithm" and "algebra" appeared in mathematics. Modeling of foundry production is carried out on the basis of analytical and numerical methods, particularly the methods of finite differences and finite elements. To complete a mathematical model, it is necessary to be able to combine theory and practice to solve engineering problems; to choose measuring facilities according to the required accuracy and operating conditions; to follow metrological norms and rules and comply with the requirements of national and international standards; to apply physical and mathematical tools to the solution of problems; to use the basic concepts, laws, and methods of thermodynamics, chemical kinetics, and heat and mass transfer; to choose and apply appropriate methods for modeling physical, chemical, and technological processes; and to use information technologies in solving problems.
The first step in our previous studies of obtaining promising alloys in foundry technology was to substantiate the problems and prospects of developing mathematical models of heat and mass transfer processes using linear algebra methods and the Lagrange interpolation polynomial. To develop and analytically implement the mathematical model, technologies developed and implemented in industrial production were chosen as the object of study, in particular the extraction of metals from liquid slag and the increase of the operational properties of cast parts made of steel 45 [11]-[13]. Based on the analytical implementation of mathematical models of the technological process, numerical values are determined, graphs of changes in the desired parameters are plotted, and recommendations are given for industrial production. In works [14]-[18], a technology was developed to determine the flux composition for melting aluminum alloy so as to reduce the content of gas inclusions in the resulting melts. In particular, the influence of the flux composition, of the technology of loading the flux into the melt, and of the melting mode on the gas content in the resulting aluminum alloy was determined. We now move on to the development and analytical implementation of a mathematical model for quantitative changes in hydrogen and oxide inclusions in an aluminum alloy. The mathematical model of this process was developed based on experimental data.

The task of the mathematical modeling, in turn, is the analytical description of the experimental data through a mathematical function of the quantitative change in the content of hydrogen and oxide in the alloy, and a mathematical method to determine their content without additional experimental research. Determining experimental data using a mathematical function is considered equivalent to the problem of uniquely determining the coefficients of a polynomial of high degree [20], [21]. It should be noted that, in what follows, all the tables with data from the experimental research and the constructed graphs are given in the third section, Results and Discussion.

Firstly, we determine the change in the quantity of hydrogen in the alloy with a change in the average dispersion of the loaded flux. Based on the data in Table 1, the corresponding system of algebraic equations can be written. Thus, the function which characterizes the change in the amount of hydrogen in the alloy with a change in the average dispersion of the flux is obtained as expression (3). From this it is possible to conclude that the obtained result (3) makes it possible to unambiguously determine the amount of hydrogen in the alloy with a change in the average dispersion of the flux d. It is then necessary to determine the accuracy level to prove the reliability of the results.

Let us now determine the quantitative change in the oxide additions in the alloy with a change in the average flux dispersion. Based on the data in Table 1, the corresponding system of algebraic equations can again be written. Solving this system of inhomogeneous algebraic equations by the Gauss or Cramer method using the Maple 13 software package, we obtain the desired roots of the system. Based on this analytic solution, it is possible to conclude that it unambiguously determines the quantitative change in the oxide additions in the alloy with a change in the average flux dispersion d.
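The following minimal Python sketch illustrates the fitting step described above with purely hypothetical dispersion and hydrogen values (the actual Table 1 data are not reproduced here); NumPy's linear solver plays the role of the Gauss/Cramer/inverse-matrix step performed in Maple 13:

    import numpy as np

    # Hypothetical data: flux dispersion d (mm) vs. hydrogen content H (cm^3/100 g).
    # A polynomial of degree n-1 through n points gives an inhomogeneous linear
    # system V @ a = H, solved here by Gaussian elimination.
    d = np.array([0.1, 0.2, 0.3, 0.4])        # hypothetical dispersions
    H = np.array([0.52, 0.45, 0.40, 0.37])    # hypothetical hydrogen contents

    V = np.vander(d, increasing=True)         # columns: 1, d, d^2, d^3
    a = np.linalg.solve(V, H)                 # polynomial coefficients a0..a3
    print("coefficients:", a)
    print("check H(0.3):", np.polyval(a[::-1], 0.3))   # reproduces 0.40

The same construction applies to every connectivity function in the paper; only the tabulated abscissa (dispersion, temperature, or holding time) and ordinate (hydrogen or oxide content) change.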
In further research, developing and implementing a mathematical model of experimental data is of interest. In this case, the task is to determine the decrease in the amount of hydrogen (cm³/100 g) with an increase in temperature (°C) during the loading of the charge into the liquid bath [22]-[26]. At the same time, it should be noted that in the mathematical modeling of experimental data the integrity and uniqueness of the experimental data are important; in other words, it is essential to establish a unique function describing the law of change of one parameter as another parameter varies naturally. From this point of view, when developing analytical expressions, the definition of a function of one variable is considered the main task: the development of an unambiguous connectivity function. The advantage of the developed mathematical function is that it not only reproduces the experimental results but also makes it possible to predict subsequent experimental data without expensive experiments [25]-[27]. Continuing the above research, we define the following functions for this case analytically.

1. Determination of the connectivity function relating the decrease in the quantity of hydrogen (cm³/100 g) to the increase in temperature (°C) during loading of the charge into the liquid bath. This is based on the experimental data given in Table 2. In this case, the charge temperature is the natural variable. Table 2 gives five temperature values, so the degree of the desired polynomial will equal 4. Accordingly, a system of algebraic equations is formed, and the connectivity function determining the decrease in the quantity of hydrogen (cm³/100 g) with an increase in temperature (°C) during the loading of the charge is obtained. From the coefficients, the inhomogeneous system of algebraic equations is assembled, and the roots of this system are determined by the inverse matrix method. From the obtained roots, we obtain the connectivity function for the change in aluminum oxide in the alloy E (%) with an increase in temperature T (°C) during the loading of the charge into the liquid bath. Likewise, from the numerical values a nonhomogeneous system of algebraic equations is formed; as in the previous case, the roots of this system are determined by the inverse matrix method, and the connectivity function is obtained to determine the change in hydrogen content O (cm³/100 g) in the alloy depending on the holding time t (minutes) of the heated charge during the research.

4. Now let us determine the regularity of changes in oxide content K (%) depending on the holding time (minutes) of the heated charge during the research. The experimental data are presented in Table 3.
For this case, the nonhomogeneous system of algebraic equations is formed in the same way. As a result, a function was obtained that unambiguously determines the regularity of changes in oxide content K (%) depending on the holding time (minutes) of the heated charge during the research period.

In general, the analytical studies make it possible to say that the degrees of the connectivity functions in the form of polynomials (3), (6), (9), (13), (17), (21) can be increased based on the numerical values of the experimental research given in the tables. It can be seen that, owing to the value of the coefficient in front of the highest-degree term of the polynomial, the connectivity functions can reproduce the data to an accuracy of about 10⁻³-10⁻⁴. This means that the error is approximately zero, allowing a high degree of accuracy. In addition, based on the obtained connectivity functions and the preliminary tabular data, it is possible to obtain further results without additional experimental research.

In concluding this section, it should be noted that it seems useful to use integral equations in further research. In particular, for mathematical modeling of the heat transfer process during gas or electric-arc melting of aluminum alloys, the law of conservation of energy can be written for each alloy layer in integral form; after some transformations, passing to the limit yields the differential equation (22), in which F(y, t) is the density of heat sources characterizing the change in the energy influx in each internal layer of the alloy. By integrating the differential equation (22) under exact initial and boundary conditions, it is possible to estimate mathematically the heat transfer process during the melting of aluminum alloys (a minimal finite-difference sketch of such a heat balance is given after the figure and table captions below).

Results and Discussion
This section presents tables and graphs of changes in the main desired parameters based on the mathematical functions developed in the previous section. In particular, Table 1 gives the results of the experimental data that determined the dependence of the hydrogen and oxide content in the alloy formed during melting of the charge on the average dispersion of the loaded flux. Based on the functional dependencies, graphs of changes in the main desired parameters and numerical values in tabular form for engineering and applied calculations are constructed. In particular, the following are plotted: graphs of the change in the content of hydrogen and oxide inclusions in the alloy with an increase in the average dispersion of the flux d; graphs of the change in hydrogen amount with an increase in temperature during loading of the charge into the liquid bath; the change in aluminum oxide amount in the alloy E (%) with an increase in temperature T (°C); and the regularity of change in hydrogen content in the alloy O (cm³/100 g) and percentage of oxide K (%) depending on the holding time (minutes) of the heated charge during the research period.

Fig. 1. The graph of the change in the amount of hydrogen in the alloy with an increase in the average dispersion of the flux d.
Table 2. Changes in the amount of hydrogen with increasing temperature during loading of the charge into the liquid bath (temperature of the charge during its loading into the liquid bath, °C).
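As a purely illustrative complement to the heat-balance argument above, the sketch below integrates a one-dimensional heat-conduction equation with a source density F(y, t) using an explicit finite-difference scheme; the material parameters, source profile, and boundary conditions are hypothetical and are not taken from the paper:

    import numpy as np

    # Explicit finite-difference sketch of rho*c * dT/dt = k * d2T/dy2 + F(y, t),
    # the differential form implied by the layer-wise energy balance above.
    rho_c, k = 2.4e6, 200.0          # volumetric heat capacity [J/m^3 K], conductivity [W/m K]
    L, ny, dt, nt = 0.1, 51, 0.01, 500
    dy = L / (ny - 1)
    alpha = k / rho_c
    assert alpha * dt / dy**2 < 0.5  # stability condition of the explicit scheme

    y = np.linspace(0.0, L, ny)
    T = np.full(ny, 300.0)           # initial temperature [K]
    F = lambda yy, t: 5.0e5 * np.exp(-((yy - L / 2) / 0.01) ** 2)   # hypothetical heat source

    for n in range(nt):
        lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dy**2
        T[1:-1] += dt * (alpha * lap + F(y[1:-1], n * dt) / rho_c)
        T[0], T[-1] = 300.0, 300.0   # fixed boundary temperatures
    print("peak temperature after %.1f s: %.1f K" % (nt * dt, T.max()))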
The connectivity function for the change in aluminum oxide in the alloy E (%) with an increase in temperature T (°C) during the loading of the charge into the liquid bath was obtained, as was the connectivity function determining the change in hydrogen content O (cm³/100 g) in the alloy depending on the holding time t (minutes) of the heated charge during the research period (Table 2). For this case, the roots of the system of algebraic equations are determined by the Jordan-Gauss method:
3,819.2
2023-01-01T00:00:00.000
[ "Materials Science" ]
Identification and Conservation State of Painted Wall Plasters at the Funerary House in the Necropolis of Tuna El-Gebel, El-Minia, Upper Egypt. The principal aim of this study was to obtain more information about the technique and conservation condition of the Egyptian wall paintings of the Roman period in the funerary house in the necropolis of Tuna el-Gebel, El-Minia, Upper Egypt. The site dates back to the 2nd century AD and comprises different Ptolemaic and Roman chapels; some are in the traditional established style while others are a blend of Pharaonic and Greek styles, and both are covered with mural painting. Deterioration problems observed on the wall paintings of the funerary house are loss of plaster layers, disintegration of plaster layers, loss of paint layers (blistering and peeling), discoloration, and severe damage owing to many deterioration factors such as weakness of the mud brick support, deterioration of surface treatments, and the widespread presence of different salts. The materials used in the painting and preparation layers and the state of conservation of the mural painting at the funerary house were investigated by integrated physico-chemical measurements, particularly micro-Raman spectroscopy (µRaman), light optical microscopy (LOM), scanning electron microscopy (SEM) coupled with an energy dispersive X-ray analysis system (EDX), X-ray powder diffraction (XRD), and Fourier transform infrared spectroscopy (FT-IR). In addition, the morphology of the multilayer plaster from the wall painting was investigated using atomic force microscopy (AFM). A wide color palette utilized in the necropolis has been identified, comprising mineral pigments and pigment mixtures. It was found that the paints were based on an organic binder, and traditional pigments (azurite, hematite, ochre, vegetable black) were used as colorants on plaster. The examination demonstrated that the preparatory layer is almost pure lime, while the plaster layer is based mainly on lime and gypsum with variable amounts of quartz. The obtained results provide information about the painting technique, chemical composition, and crystal structure, in addition to the stratigraphy of the paint layers, the state of preservation, and the causes of the painting deterioration. Furthermore, the obtained results can be used in the conservation and restoration interventions of these sites.
Introduction
The ancient Egyptian site of Tuna el-Gebel lies on the outskirts of Amarna, the capital of the pharaoh Akhenaten, and is one of several necropolises of ancient Hermopolis (modern Ashmunein). It is situated about 300 km south of Cairo, in Middle Egypt on the western side of the Nile, west of the modern village of Deirut. It is the necropolis of Hermopolis Magna, ancient capital of the 15th Nome and cult centre of Thot, god of writing and sciences [1]. The ruins of Tuna el-Gebel are scattered over an area of about three kilometers. The tombs of the Tuna el-Gebel necropolis are generally dated to the first half of the 2nd century AD and were discovered in 1919 [2]. Most of the tombs at the necropolis were built of mud brick and only the doorjambs were of limestone. Very important mural paintings decorate the tombs, the funerary house, and the connected rooms of the tombs. The paintings inside the tombs comprise funeral scenes with ancient Egyptian gods, cartouches with hieroglyphic inscriptions, geometric and floral decorations, and imitation of marble revetment (Figure 1). This is clear from the restoration work that has been done in the region, which led to further deterioration of these murals. Although they have been recently restored, the paintings appear severely deteriorated. The murals in all the tombs at the Tuna El-Gebel necropolis are in a critical state of conservation. They have been exposed to a wide variety of natural and human threats, including moisture infiltration and condensation, salt deterioration, devastating fires, bat infestations, detrimental reuse and, in more recent times, the irreversible impact of misguided interventions. The aim of this work is therefore to set up a scientific conservation basis and a guide for a conscious intervention on these paintings. For this purpose, different optical and analytical techniques were used.

Sample and Sample Preparation
Sampling strategy is the first step in the analysis process. Representative sampling from the Tuna El-Gebel tombs was performed in close collaboration with mural painting conservator-restorers after careful examination of the painted surfaces. Samples were collected from detached and fallen parts of the paintings, taking care to collect samples as small as possible in size. Moreover, attention was given to locating and describing the painting techniques (stratigraphy, pigment). Appropriate representative samples of the visible pigments (blue, green, yellow, red, dark red, brown, black and white) were collected, and each sample was labeled. To identify the stratigraphic characterization of the polychrome surfaces, polished cross-sections were prepared. Fragment samples were embedded in epoxy resin and mounted on glass slides, and the cross sections were obtained by polishing the embedded samples with abrasive disks using silicon carbide paper of successively finer grades from 120, 400 and 800 to 1000. Different series of laboratory tests were applied to the samples to determine their basic characteristics.

Light Optical Microscopy (LOM)
Paint fragments and polished cross sections were examined using a Leica DM 100 stereomicroscope under normal reflected light at 40× to 100× magnification. The photomicrographs were recorded with a Leica EC3 12-megapixel digital camera.
Atomic Force Microscopy (AFM)
AFM has been applied to the examination of art and archaeological objects and gives a topographic map of an altered archaeological surface. The use of an atomic force microscope (AFM) for rapid assessment of the state of the ground layer was investigated, and details of the surface topography of this sample were obtained. The study established an AFM imaging technique that produces data representative of the weathering rates of the ground layer under a range of weathering regimes of varying severity. The effect of scan size on the average roughness parameter was investigated. The AFM maps the topography of a substrate by monitoring the interaction force between the sample and a sharp tip attached to the end of a cantilever, so that the morphology of the surface of the studied sample can be reproduced at nanometer resolution [3]. The atomic force microscope is designed to provide high-resolution (in the ideal case, atomic) topographical analysis, applicable to both conducting and non-conducting surfaces [4]. Atomic force microscopy was developed by Binnig, G. et al. in 1988 [5] to measure forces as small as 10⁻¹⁸ N. Surface topography information was obtained by cutting a 12 mm diameter disk from the ground layer and using a Shimadzu WET-SPM (scanning probe microscope).

SEM-EDX
Semi-quantitative analyses of elemental composition were obtained using an EDX energy dispersive X-ray spectrometer at acceleration voltages of 200 V to 30 kV.

X-Ray Diffraction
The mineralogical structure of the support and plaster layers was determined by X-ray diffraction (XRD) analyses performed with a PW 1840 diffractometer equipped with a conventional X-ray tube (CuKα radiation, 40 kV, 25 mA, point focus).

Micro-Raman Spectroscopy (µRaman)
The collected paint samples were examined with µRaman spectroscopy, a versatile technique for analyzing both organic and inorganic materials that has experienced noticeable growth in the field of art and art conservation in parallel with the improvement of the instrumentation [6]. Raman spectroscopic studies of wall paintings and frescoes have addressed the composition of pigments and pigment mixtures and their interactions with their substrates [7]. Raman spectroscopy is a spectral analysis of light scattered from a sample bombarded by a monochromatic (laser) light beam [8]. The use of Raman spectroscopy as a technique for the characterization of mineral pigments on historiated manuscripts and wall paintings has been demonstrated for several scenarios [9]. Components of the pigment samples were probed using Raman microscopy. Micro-Raman spectra were recorded using a laser power at the sample of about 0.3 mW and an objective lens of 50× magnification on an Olympus microscope. The Raman analyses were carried out with a Senterra (Bruker) spectrometer and a laser at 785 nm, at a power ranging from 5 to 25 mW according to the thermal stability of the compounds to be investigated, and a charge-coupled device (CCD) detector (−65 °C) cooled by the Peltier effect at 200 K. A Jasco 2000 spectrometer combined with an Olympus microscope, cooled with liquid nitrogen, was also used with a laser at 1000 nm and a power lower than 50 mW. The spectra were generally recorded between 1200 and 150 cm⁻¹. The possible presence of organic substances was studied at high wavenumbers, whereas the inorganic compounds, oxides and sulphides, were investigated at lower wavenumbers. For each paint fragment or layer, an average of 30 particles
was analyzed, with a variable collection time according to the magnitude of the scattering signal. The microscope was fitted with an Olympus U-TV1x-2 camera.

FTIR Spectroscopy (FTIR)
The binding media of the paint layers were identified by Fourier transform infrared spectroscopy (FTIR) using a JASCO FTIR-460 spectrophotometer.

Painting Layer Structure
Observation of the prepared cross sections through an optical microscope shows that there are two or three clearly differentiated layers: an internal layer or mural substrate, a fine layer, and a pictorial film.

Painting Layer Morphology
The AFM maps of the morphology of the plaster surface (Figure 5) indicated that the sample surfaces were very rough, with an extensive array of bumpy features. Progressive pitting and surface damage of the samples suggested that the surface had been subjected to many deterioration processes. In addition, etching was assumed to remove weak boundary layers from the surface; because of this etching, the roughness of the surface increased. RMS roughness data, given as the standard deviation over all height values within the surface area of interest, were determined from 1 µm² AFM images; the RMS values are given in Figure 6. Aggregates of the rough plasters with particle sizes between 184,746 and 1,033,004 nm (about 0.18-1.03 mm) composed the largest fraction of the total aggregates. Aggregates bigger than 476 nm and smaller than 189 nm composed the smallest fraction of the total aggregates. This shows that mostly fine aggregates, between coarse (greater than 1.03 mm) and very fine (less than 0.18 mm), were used in the preparation of the rough plasters. The aggregates used in the bottom plasters are semi-circular in shape and are mainly white and semi-opaque.

Composition of Painting Layer
XRD patterns of the rough plasters (Figure 7) indicated that the main constituent is calcium carbonate, calcite (CaCO₃). SEM-EDS analyses (Figure 8) show that the finishing plaster below the painting layers consisted of high amounts of Ca, S and Si (50.81, 24.92 and 16.32 %, respectively) and small amounts of Al, Fe, Ti and Sr (Figure 8). Considering these results, it can be claimed that the plaster samples were prepared using lime and gypsum as binder and quartz as aggregate.

Paint Layer
Inorganic pigments have been identified in the different painting layers. Figure 9 shows optical microscope images of the paint samples under reflected light. The identification of the samples was carried out using µRaman spectroscopy and SEM-EDX analysis; Table 1 provides a summary of the EDX results.

Blue Pigment
The stereomicroscope examination shows variety in color, ranging from deep to light crystals spread within a transparent matrix, rich in detached parts, white stains and sand grains (Figure 9(a)). µRaman spectra (Figure 10) show the characteristic bands of Egyptian blue (calcium copper(II) silicate); a band assigned to the symmetric stretching mode (ν1) is attributed to calcite (CaCO₃). A few bands at 200 and 400 cm⁻¹ are probably due to the presence of quartz and amorphous silica. The SEM-EDS analysis of the blue pigment (Figure 11) shows that peaks of Si (54.25%), Ca (22.04%) and Cu (14.54%) are present, and their atomic percentage ratios agree with the formula of cuprorivaite.
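As a small arithmetic check on the statement above, the sketch below normalizes the reported EDX percentages of the blue pigment to copper and compares them with the ideal cation ratio of cuprorivaite (CaCuSi₄O₁₀); treating the reported values as atomic percent is an assumption, and the excess of Ca is consistent with the admixed calcite identified by µRaman:

    # Quick stoichiometry check of the blue-pigment EDX data against cuprorivaite
    # (CaCuSi4O10, cation ratio Si : Ca : Cu = 4 : 1 : 1).
    atomic_pct = {"Si": 54.25, "Ca": 22.04, "Cu": 14.54}
    ref = atomic_pct["Cu"]
    ratios = {el: round(v / ref, 2) for el, v in atomic_pct.items()}
    print(ratios)   # ~{'Si': 3.73, 'Ca': 1.52, 'Cu': 1.0} vs. ideal 4 : 1 : 1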
EDX pattern of the blue pigment. Various analyses suggest that Egyptian blue was manufactured by heating together silica sand (SiO₂), copper alloy filings or ores (the copper compounds malachite, Cu₂(OH)₂CO₃, or azurite, Cu₃(OH)₂(CO₃)₂), calcite (CaCO₃) and potash or natron, and it was commonly found in artefacts from the 4th Dynasty onwards (Lucas, 1962) [10]. Its earliest recorded use was in the IV Dynasty (2613-2494 BC), and its use lasted throughout the dynastic period and continued into the Roman period [11].

Green Pigment
Under the stereomicroscope, the green paint area shows various shades, with yellow-orange and blue particles scattered in the matrix. The green particles are pale and some are faded green, combined with detachment of pigment particles in some areas. The Raman spectrum of the green pigment (Figure 12) presents typical peaks of cuprorivaite at 1092, 1040, 501 and 433 cm⁻¹; moreover, the strong band at 632 cm⁻¹ together with the bands at 240 and 550 cm⁻¹ indicates well-crystallized goethite. EDX analysis (Figure 13) revealed the presence of iron (Fe), copper (Cu), silicon (Si) and calcium (Ca); this means the painter produced the green pigment by mixing Egyptian blue and yellow ochre. Green was otherwise obtained from powdered malachite or from an artificial green frit. In some instances, the green color was acquired by mixing Egyptian blue with yellow ochre. Such a technique of obtaining green, which appeared sporadically during the XIIth Dynasty (1991-1786 BC), became much more widespread during the Amarna period (1370-1352 BC) [12]. Howell G. (2004) mentioned that one of the green pigment specimens from Greco-Roman double coffins was formed from an admixture of yellow and blue pigments [13].

Yellow Pigment
On careful observation under the stereomicroscope, the yellow pictorial layer appeared to have different shades of yellow that can be described qualitatively as bright yellow, orange-yellow and brownish-yellow, and showed small, rounded grains of the pigment; the hue is affected by impurities found in the layer. EDX analysis of the sample from the yellow paint layer (Figure 14) indicated that iron (Fe) was the most abundant element, with some traces of Al, Si and K. As a token of the Sun and as the color of happiness and prosperity, yellow was an important color [12]. The Raman spectrum of the yellow pigment indicates that it is like the ancient yellow ochre; yellow ochre is identifiable through the characteristic bands at 1406, 1278, 1200, 1086, 1007, 635, 415, 278 and 222 cm⁻¹ (Figure 15). The only yellow pigment available in ancient and prehistoric art was yellow ochre (hydrated oxide of iron, Fe₂O₃·H₂O), which was not totally satisfactory in view of its pale hue [14].
Red and Brown Pigments
Field observation and stereomicroscopy indicated three grades of red color in the Tuna El-Gabel necropolis: red, brown, and dark red (nbiti). XRD analysis indicated that iron is responsible for all grades. Iron(III) oxide, hematite, occurs in almost all the red pigments studied here, often in admixture with calcite and carbon to give lighter and darker colors. Hematite is not found as a pure mineral pigment in the red samples but occurs in admixture with clays and sand to give red ochre. Spectroscopically, it is possible to detect the presence of red ochre through the increased bandwidths of the hematite bands (224.86, 293.24, 411.16, 614.21 cm⁻¹) (Figure 16, Figure 17) caused by the interactions of the mineral with the aluminosilicate clay matrix. EDX analysis of the red pigment (Figure 18) shows that peaks of Fe, Si, Al and K are present; the quartz derives from fine river sand, which was sometimes added to assist in the preparation and grinding of the pigment mixture.

Dark Red Pigment
Field observation indicated that the dark red color provides a decorative transition from the dark yellow to the brown-black and then to the light yellow background. Under the stereomicroscope the sample shows very dark particles tending to red, together with black particles. EDX analysis reveals that the sample is composed of Fe and C (Figure 19). The dominant Raman bands of this pigment remain at 224 and 291 cm⁻¹, typical peaks of hematite. The Raman spectrum recorded on black grains in the brown paint layer shows bands at 1376.22 and 1011.52 cm⁻¹ (Figure 20). Figure 17 shows the µRaman spectrum of the brown pigment, identifying red ochre with weak calcite, and the µRaman spectrum of carbon particles in admixture with red ochre to produce a dark red (nbiti) pigment.

White Pigment
Optical microscopic investigation shows that the white paint layer is slightly thick, with different chromatic hues ranging from red to yellow, rich in voids and covered with dust. EDX analysis (Figure 21) confirmed the presence of elements consistent with calcite and gypsum. All outcomes acquired from the analysis of the mural painting specimens from Tuna El-Gabel are listed in Table 2.

Composition of the Binder
In the FTIR spectrum of the organic binder (Figure 23), a typical stretching band of alcohols is present as the OH stretching band at 3403 cm⁻¹; carbonate (CO₃²⁻) and sulphate (SO₄²⁻) bands, the latter at 1143 cm⁻¹, were also indicated (Table 3).

Conclusions
This study was carried out with the end goal of forming a database of the application technique and material properties of the mural paintings of the Tuna El-Gabel necropolis for the purpose of their preservation. The field observation and laboratory analysis indicated that the mural paintings are found in tombs that were mostly built of mud brick, with only the doorjambs built of limestone; the walls were plastered with mud mortar tempered with chaff and given a coat of lime-and-sand plaster, which acted as the ground for the painted decoration. The different techniques of examination and analysis provided information about the mural painting layers at the Tuna El-Gabel necropolis as follows. There are two or three clearly differentiated layers in the mural paintings of the necropolis. The coarse plaster layers form a denser coat than the fine layer and were applied in varying thicknesses in multiple layers to build up and level the brick wall; they are composed of lime, gypsum, aggregate and straw for reinforcement (the SO₄²⁻ bending band of gypsum appears at 700-600 cm⁻¹).
The fine plaster layer contains a high amount of lime and a little sand and does not contain straw. Arabic gum was used as a binder in the preparation of the paint layer in the mural painting. Many pigments were used and were executed in the tempera technique on a dry surface rendering. For the pigment components, the µRaman spectra and EDX data showed that different pigments had been employed:
- The white pigment sample consists of a mixture of calcite CaCO₃ and gypsum CaSO₄·2H₂O.
- The black pigment contains carbon C.
- The green pigment is a mixture of cuprorivaite and goethite FeOOH.
- The red and brown pigments contain hematite Fe₂O₃.
- The dark red pigment consists of hematite and traces of carbon.
- The yellow pigment is yellow ochre.
- All pigments were mixed with Arabic gum as binder.
The field and laboratory investigation indicated that the mural paintings in the Tuna El-Gabel necropolis suffer from many deterioration aspects. The most serious deterioration phenomenon of the tomb is a clear detachment between the rough layer and the mudbrick wall, and a lot of wall paintings have been completely lost due to the loss of adhesion between the mudbrick support and the plaster. The problem is further complicated by the presence of salts. Deterioration observed on the mural paintings was also caused by thermal alteration in the building. Detaching layers of the paintings must be consolidated with appropriate materials and methods after carrying out experimental studies (a modified lime-based mortar is recommended). Restoration of the paintings must be performed after the consolidation of the building. Paintings must be covered with suitable methods in order to avoid any damage that could occur during the restoration of the building. The restoration should be followed by a long-term conservation plan based on scientific and analytical study.

Figure 1. Mural paintings decoration at the tombs and funerary house at the necropolis of Tuna el-Gabel: (a) geometric decorations and imitation of marble revetment; (b) floral decoration.
Figure 2. The critical state of conservation of the painted surface at the necropolis of Tuna el-Gabel: (a) fallen, lost and separated painted layer; (b) extensive network of micro-cracks.
Figures 4(a)-(d) illustrate the pictorial strata of the mural painting sample studied through an optical microscope, as follows: (1) Mud brick support. (2) Inner (bottom) layer, applied over the wall structure, white in color, coarse in texture, and applied in varying thicknesses and in multiple layers to build up and level the mudbrick wall. In some tombs this layer is reinforced with strips of chopped straw; the strip length could be from 300 to 500 µm and, according to the LOM investigation, the straw could be barley straw (Figure 4(c)). (3) A second white plaster (fine layer) used as a finishing coat, providing the ground for the first mural scheme. It is characterized by surfaces with variable cavities, and another sample reveals a homogeneous surface composed of fine grains. In some areas, below the main paint layer, an over-painting technique was observed: the painters applied the rough plaster to the mud brick tomb, a paint layer with yellow pigment, a plaster layer mixed with fine particles of yellow paint, and finally the paint layer of the tomb (Figure 4(d)). (4) Paint layer.
Figure 4. The layer distribution of the mural painting at Tuna El-Gabel: (a) cross-section images of the stratigraphy of the mural painting; (b) the stratigraphy of the mural painting under reflected light; (c) chopped straw mixed with the rough layer; (d) over-painting technique.
Figure 5. Topographic map of the surface of plaster samples obtained by AFM.
Figure 6. RMS surface roughness of the plaster sample.
Figure 12. µRaman spectrum of the green pigment showing cuprorivaite and goethite in combination.
Figure 16. µRaman spectrum of the red ochre pigment diluted with calcite to produce a light red, showing signatures of red ochre with Raman bands at 645, 411 and 220 cm⁻¹.
Table 1. Elemental analysis of mural painting components from the Tuna El-Gabel necropolis.
Table 2. Summary of the outcomes acquired from mural painting specimens from Tuna El-Gabel.
Table 3. FTIR results of the binding medium of the paint layer in the Isis temple.
5,147.8
2017-07-13T00:00:00.000
[ "Materials Science" ]
Efficient cell pairing in droplets using dual-color sorting
The use of microfluidic droplets has become a powerful tool for the screening and manipulation of cells. However, currently this is restricted to assays involving a single cell type. Studies on the interaction of different cells (e.g. in immunology), as well as the screening of antibody-secreting cells in assays requiring an additional reporter cell, have not yet been successfully demonstrated. Based on Poisson statistics, the probability for the generation of droplets hosting exactly one cell of two different types is just 13.5%. To overcome this limitation, we have developed an approach in which different cell types are stained with different fluorescent dyes. Subsequent to encapsulation into droplets, the resulting emulsion is injected into a very compact sorting device allowing for analysis at high magnification and fixation of the cells close to the focal plane. By applying dual-color sorting, this furthermore enables the specific collection and analysis of droplets with exactly two different cells. Our approach shows an efficiency of up to 86.7% (more than 97% when also considering droplets hosting one or more cells of each type), and, hence, should pave the way for a variety of cell-based assays in droplets.

For the sorting chip (Fig. 2B), the size of the restricted sorting channel was 40 µm × 40 µm (height × width) × 475 µm (length). The main channels before the restriction channel were 75 µm in height and width. For the collection chip (Fig. 2F), the height of the lower-layer chamber was 40 µm and the size of the upper-layer trap was 100 µm in diameter × 100 µm in height. All microfluidic devices were fabricated using standard soft-lithography [3]. Molds were fabricated on silicon wafers using SU-8 resist (Microchem) and patterned by exposure to 375 nm light through 25400 dpi patterned masks (Suess). A mixture of 90% polydimethylsiloxane (PDMS) elastomer (Sylgard 184 polymer base; Dow Corning) and 10% (w/w) curing agent (Dow Corning) was poured over the SU-8 molds, degassed, and incubated at 65 °C overnight. Polymerized PDMS was peeled off from the mold, activated by incubation for 1 min in an oxygen plasma oven (Diemer Femto), and bonded to a 50 × 75 × 0.4 mm ITO glass slide (Delta Technologies). Inlets and outlets were punched using 0.5 mm diameter biopsy punches (Harris Uni-Core) for the electrodes and 0.75 mm diameter biopsy punches for the rest. The channels were first flushed with Aquapel (PPG Industries) and, subsequently, with HFE7500 oil (3M).
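Returning to the Poisson argument quoted in the abstract above, the following back-of-the-envelope sketch (assuming each cell type is loaded independently with a mean occupancy of λ = 1 per droplet) reproduces the 13.5% figure:

    from math import exp, factorial

    # Probability of a droplet containing exactly one cell of each of two
    # independently encapsulated cell types, each with Poisson mean lambda = 1.
    def poisson(k, lam=1.0):
        return lam**k * exp(-lam) / factorial(k)

    p_one_of_each = poisson(1) * poisson(1)
    print(f"P(exactly one cell of each type) = {p_one_of_each:.3f}")  # ~0.135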
Cell cultivation and encapsulation
Her2 hybridoma cells (ATCC® CRL-10463) were grown in complete DMEM medium (Gibco) and Jurkat cells (ATCC® TIB-152) were grown in RPMI medium (Gibco), both supplemented with 10% FBS. Hybridoma cells were harvested, stained with Calcein-AM (Life Technologies) and Calcein Violet (eBioscience), respectively, at room temperature for 45 min, washed twice with PBS to remove free dye from the media, and re-suspended in FreeStyle medium (Gibco) supplemented with 1 mg/ml xanthan gum (Sigma) to prevent cell sedimentation during encapsulation. Subsequently, green and violet cells were mixed equally at a final concentration of 1.5 × 10⁶ cells/ml and injected at a flow rate of 1000 µl/h into the droplet generation chip. Droplets were generated by flow-focusing this continuous phase using Novec HFE7500 oil containing 5% PEG surfactant [3] (custom synthesized at Sigma Aldrich) at a flow rate of 4000 µl/h. Emulsions were collected in a collection tube (cryotube, Nunc) which was treated with Aquapel (PPG Industries) and, subsequently, rinsed with HFE7500 oil.

Sorting and Imaging
Emulsions were re-injected using an electro-osmotic pump (Nano Fusion Technologies) at a flow rate of about 60 µl/h. Oil with 0.5% and 0.25% PEG surfactant was loaded into individual syringes and injected by Harvard Apparatus PHD 2000 syringe pumps at flow rates of 400 µl/h (Fig. 2B, (a) & (c)) and 600 µl/h (Fig. 2B, (d)), respectively. A refilling pump was connected to outlet E (Fig. 2B, (e)) to withdraw all of the droplets that did not trigger sorting into the waste syringe at a flow rate of 760 µl/h. Droplet sorting videos were acquired at ~500 frames per second. A customized LabVIEW sorting program was used to control the droplet sorting. The positive droplets were collected in the collection chip (Fig. 2F-H) and the trapping events were monitored on a cell imaging device (CytoMate Inc.). The collection was finished when all of the traps were occupied. Subsequently, the collection chip was rinsed with oil containing 0.25% PEG surfactant to remove un-trapped droplets. Sorting enrichment was determined by automated scanning of the entire collection chip at 10-fold magnification using an inverted fluorescence microscope (Nikon Eclipse Ti) equipped with a motorized stage and a Hamamatsu digital camera.

Table S1. Flowchart summarizing the logic of the LabVIEW control software programmed at EMBL, Heidelberg. This algorithm runs in parallel for both of the PMT channels (one for each colour) and detects peaks in the signal values. This allows cells within droplets to be detected and a sorting decision to be made for each passing droplet based on the intensity of the signal, the number of peaks detected, the width of the overall peak and the spacing between droplets that contain at least one cell. This software and a user manual can be freely downloaded for academic use at www.merten.embl.de/index.html. Imaging is performed using an inverted microscope equipped with a high speed camera.

Fig. S3. Example of the signal peaks in one droplet. The zoom-in (inset) reveals a jigsaw shape of the signal at low intensity, thus making the use of inflection points for the detection of peaks impossible.

Fig. S4. Signal variation of Calcein-AM and Calcein Violet stained cells inside droplets.
(A) Droplet showing one green peak and two overlapping violet peaks, corresponding to a clump of 2 violet cells, with a valley between the two peaks above a value of 0.5 fluorescence units. This value is higher than the green peak of another droplet (B) hosting exactly one green and one violet cell. Therefore, using static thresholds (solid black lines) is not sufficient to accurately detect the number of encapsulated cells. However, when specifically detecting drops in the fluorescence signal exceeding the maximum noise (red dots), the number of peaks can be correctly determined, independently of the peak intensities (a simplified sketch of this peak-counting logic is given after the figure captions below).

Fig. S2. Schematic of the optical setup. The fluorescence-based sorting setup uses diode lasers with excitation wavelengths of 405 nm (Calcein Violet), 488 nm (Calcein-AM) and 561 nm (optional third laser for assay readouts). Emission signals are detected using PMTs with a 450 nm band-pass filter (blue), a 521 nm band-pass filter (green), and a 610 nm long-pass filter (red). Sorting signals are processed using LabVIEW software running on an FPGA card triggering a high-voltage amplifier. Imaging is performed using an inverted microscope equipped with a high speed camera.

Figure S5. Leakage of Calcein Violet from cells encapsulated into droplets. (A) Zoom-in of the fluorescence signals over 8 hours of incubation at room temperature. The strongly decreased scale of the Y-axis (from 0 to 0.05 A.U.) allows the increase in the droplet signal (wide peaks) to be illustrated, but requires cropping of the cell signals (narrow sub-peaks with intensities as shown in (B)). (B) Time course of fluorescence signals of droplets hosting Calcein Violet-stained hybridoma cells. After incubation for the indicated time periods off-chip, the droplets were re-injected into the sorting device and the fluorescence signals were determined in the detection channel using a PMT. (C) Fitted LOESS smoothing line of droplet fluorescence intensities (turquoise line), individual data points (turquoise circles) and confidence bands (grey shades) of the droplet signals. (D) Fitted LOESS smoothing line of cell fluorescence intensities (red line), individual data points (red circles) and confidence bands (grey shades) of the cell signals. (E) Intensities of cell and droplet signals plotted at the same scale.

Fig. S7. Efficiency of the sorting process for droplets hosting differently stained Her2 hybridoma cells. Blue fluorescence of droplets captured in the collection chip before (A) and after (C) sorting. Green fluorescence of droplets captured in the collection chip before (B) and after (D) sorting.

Fig. S8. Fluorescence analysis of the droplets detected by PMT. (A) Two-dimensional dot plot of fluorescence signals of droplets. The red arrow indicates one example of a dual-color droplet with two cells. (B) Dot plot showing violet and green signals of the droplets. (C) Droplet occupancy before sorting.

Fig. S9. Efficiency of the sorting process for droplets hosting differently stained Jurkat cells. Blue fluorescence of droplets captured in the collection chip. Sorting results: ND = not detectable.
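The peak-counting idea illustrated in Figs. S3 and S4 (counting a peak only when the signal rises and then falls by more than the maximum noise amplitude, rather than crossing a fixed intensity threshold) can be sketched as follows; this is a simplified stand-in for the LabVIEW/FPGA logic, and all parameter values are illustrative:

    import numpy as np

    # A peak is counted only when the signal first rises above the running minimum
    # by more than max_noise and then falls below the running maximum by more than
    # max_noise, so overlapping cell peaks separated by a valley deeper than the
    # noise are counted separately, without any fixed threshold.
    def count_peaks(signal, max_noise):
        peaks = 0
        running_max = running_min = signal[0]
        rising = False
        for x in signal[1:]:
            running_max = max(running_max, x)
            running_min = min(running_min, x)
            if not rising and x - running_min > max_noise:
                rising, running_max = True, x                       # candidate peak started
            elif rising and running_max - x > max_noise:
                peaks, rising, running_min = peaks + 1, False, x    # peak confirmed
        return peaks

    # Illustrative trace: one single cell followed by a clump of two overlapping cells.
    trace = np.array([0.02, 0.03, 0.02, 0.60, 0.05, 0.04, 0.55, 0.30, 0.58, 0.04, 0.03])
    print(count_peaks(trace, max_noise=0.05))   # -> 3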
1,939
2015-09-29T00:00:00.000
[ "Biology" ]
Enhanced Electrochemical Treatment of Phenanthrene-polluted Soil using Microbial Fuel Cells
In this study, tubular microbial fuel cells (MFCs) were inserted into phenanthrene-contaminated water-logged soil in order to evaluate their treatment efficiency and overall system performance within a 60-day incubation period. At day 10, phenanthrene degradation rates were found to decrease with increasing distance from the anodes, from 50-55 % at 2 cm to 38-40 % at 8 cm. Bromate (used as a catholyte) removal in both MFCs was about 80-95 % on average, which is significantly higher than the open circuit controls (15-40 %) over the 60-day period. Total chemical oxygen demand removal (72.8 %) in MFCs amended with surfactant was significantly higher than in MFCs without surfactant (20 %). This suggests that surfactant addition may have enhanced the bioavailability not only of phenanthrene but also of other organic matter present in the soil. The outcomes of this work demonstrate the simultaneous removal of phenanthrene (86%) and bromate (95%), coupled with concomitant bioelectricity generation (about 4.69 mWm), using MFC systems within a radius of influence (ROI) of up to 8 cm. MFC technology may be used for in situ decontamination of soils due to its potential detoxification capacity and could be deployed directly as a prototype MFC design in field applications.

Serious environmental and health concerns are associated with soil contamination resulting from waste disposal and leakage from storage tanks or during the transportation of petroleum from one point to another. Petroleum hydrocarbons are known to adversely affect human health and render soils hazardous through contamination with BTEX and PAH compounds (Wang et al. [1], Guo and Zhou [2], Sarkar et al. [3], Zhou et al. [4]). A recent report on oil contamination in Ogoniland in Nigeria revealed that benzene and PAH levels were 1800 and 500 times higher than WHO standards, respectively. Two spills in Ogoniland required about $30 billion for clean-up operations over 30 years (UNEP [5]). The clean-up of these contaminants is expensive, especially using physical and chemical methods. The use of biological methods, such as in situ bioremediation, is a relatively inexpensive, non-intrusive and eco-friendly way of treating such contaminants in sediments and soil environments.

Soil microbial fuel cells (sMFCs) are a new technology for the remediation of soils contaminated with organic compounds without the need to introduce any electron donor or acceptor into the soil or subsurface environment (Morris and Jin [6], Zhang et al. [7]). Electrodes in sMFCs can provide a less expensive and easy passage of electrons from the anode to the cathode (which does not corrode over long-term deployment), thus stimulating the anaerobic oxidation of pollutants (Morris et al. [8], Zhang et al. [7], Wang et al. [1]). Moreover, electricity production during MFC operation is an indication of substrate biodegradation and can be used to provide the energy needed for operating online monitoring wireless sensors (Donovan et al. [9]). A few studies on bioelectrochemically assisted soil/sediment bioremediation have been reported for phenol (Huang et al. [24]), BTEX compounds (Zhang et al. [7]), petroleum hydrocarbons (Morris and Jin [6]) and PAHs (Wang et al. [1]). Morris and Jin [6] reported that the TPH degradation rate in the sediment was about 12 times higher than the baseline control. Wang et al. [1] also observed a significant increase in TPH degradation (i.e.
from 6.9% to 15.2%) in a U-tube-like soil MFC, especially in water-logged soil close to the anode, but a decrease in system performance was observed with increasing distance from the anode and decreasing moisture content. Despite the previous studies on petroleum hydrocarbon removal using MFCs of various configurations, to the best of our knowledge there has been no report exploring the possibility of coupling cathodic electron acceptors other than oxygen (such as hydrogen peroxide or potassium persulfate, among others) with petroleum hydrocarbon degradation using a tubular soil MFC bioreactor design. Bromate has been demonstrated in previous studies (Adelaja [10], Adelaja et al. [11]) to be a potential electron acceptor at the cathode in lieu of platinum, due to the high cost of platinum catalyst and its limited application in subsurface anoxic environments. Bromate, a toxic pollutant, has reportedly been found in wastewater treatment effluents, groundwater, stagnant ponds/lakes and marine environments with high chloride ion concentrations (Zhao et al. [12], Bao et al. [13]).

The radius of influence (ROI), which is the distance from the anode over which enhancement of biodegradation takes place, could to a large extent determine the practical deployment of MFCs for in situ bio-treatment of oil-contaminated soil. The ROI largely depends on the MFC architecture and on the physical and biochemical characteristics of both the pollutants and the soil. In this study, newly modified column-type MFCs were developed to facilitate phenanthrene degradation (a model PAH compound) in soil, following phenanthrene concentrations as a function of distance from the MFC anodes, with concomitant bioelectricity production. The effect of surfactant addition on MFC performance was also investigated. Changes in soil ionic strength and pH were also monitored in order to check their effect on the degradation of phenanthrene.

Chemicals and reagents
All chemicals were purchased from Sigma Aldrich (Dorset, UK), Acros (UK) and QiaGen Ltd (Crawley, UK). HPLC-grade solvents, including methanol and acetonitrile (ACN), and ready-prepared COD reagent (Ficodox Plus™) were obtained from Fisher Scientific (Loughborough, UK). No further purification was carried out on the chemicals prior to use, and the reagents employed for this work were of analytical grade (≥99.98 % purity).

Soil sample collection and characterization
Contaminated soil samples used for the soil MFC studies were obtained from Barking, London, UK, a site with a known history of petroleum hydrocarbon contamination. A soil auger was used to collect soil samples at the above location from 5 to 10 cm beneath the soil surface. The soil was collected in airtight plastic bags, transported directly to the lab, spread to dry at ambient temperature for 72 h (25 ± 3 °C), and the debris of plant/animal origin and clay-like materials were removed by sieving with a 2 mm sieve. The samples were thereafter stored at 4 °C prior to use. A complete physicochemical analysis of aliquots of the soil was carried out by Forest Research, Surrey, UK. The original soil is a sandy loam with a background phenanthrene level of 1.950 mg kg⁻¹ DS (dry soil). The main physicochemical and mineral analysis of the soil sample is described in the supplementary information.
Reactor design and set-up for tubular soil MFCs
The soil MFCs were tubular MFC reactors constructed using PVC tubes with one sealed end, as shown in Figure 1, which gives the schematic diagram of the tubular soil MFC reactor experimental set-up for treating a model PAH-contaminated soil. The inner chamber of the 0.5 cm thick PVC tube (4.5 cm diameter × 40 cm length) made up the cathode chamber (with an operating volume of 200 mL), while the anode was fastened firmly onto the outer section of the PVC tube (which had evenly distributed holes of 1 cm diameter) using plastic cable ties, thus leaving the anode side exposed to the hydrocarbon-contaminated water-logged soil. The evenly distributed holes in the PVC tube allowed ion exchange between the anode and the cathode through a cation exchange membrane, CMI-7000 (Membranes International, USA). The cathode chamber was filled with the catholyte, a potassium bromate solution (1000 mg L⁻¹ at pH 5). Carbon felt was used as both anode and cathode in the experiment (C-TEX 27; Mast Carbon Inc, Basingstoke, UK), with estimated surface areas of 156 cm² and 96 cm² (measured), respectively. All electrical connections were insulated and subsequently coated with silicone rubber in order to prevent short circuits and corrosion due to immersion in the aqueous medium. An external load of 1000 Ω was connected to the MFC, and the MFC was incubated at ambient temperature (25 ± 5 °C) for 60 days, protected from sunlight. Voltage outputs from the MFCs were monitored in real time using a data acquisition system (Picolog ADC-24, Pico Technology, UK), which captured the voltage output every 10 min throughout the experiments. Water lost via evaporation and during sample collection was made up with sterile deionized water at intervals of 2-3 days to maintain the saturated condition.
Experimental design In this study, four tubular MFC units (of the same design), with two units each installed in two similar rectangular PVC storage containers, were used in the experimental set-up. In one of the storage containers, the two units installed were the test reactors (one with surfactant, MFC+S, and the other without surfactant, MFC-S), while the other two, in the second storage container, were open-circuit MFC controls, one each for MFC+S and MFC-S respectively. Two non-MFC (or anaerobic) reactors were used as baseline controls: one with surfactant, E+S, and the other without surfactant, F-S. These reactors were operated as previously described in Section 2.2. The original soil (previously described in Section 2.1) was spiked with phenanthrene and manually homogenised using an iron paddle to a final phenanthrene concentration of 1000 mg kg-1 dry soil. Prior to MFC operation, the spiked soils in each storage container were incubated for 14 days for partial aging and enrichment of the indigenous microbial population. Phenanthrene release or bioavailability and migration from the soil into the anode's ROI was tested by adding a non-ionic surfactant, Tween 80 (500 mg L-1), to the soil. The catholyte (1000 ppm bromate solution) was replaced three times a month during the operational period. Biological and chemical evaluations were conducted on soil samples by periodically sampling at 2 cm, 4 cm and 8 cm from the anode in the top, middle and bottom layers of soil. The distance from the anode's outer surface to where phenanthrene concentrations are lower than those of the control is known as the radius of influence, ROI. No external inoculum was introduced; the indigenous microbes in the soil were relied upon. Petroleum hydrocarbon determination For the analysis of anolyte samples to determine the phenanthrene concentrations present, high-performance liquid chromatography (HPLC, Dionex GS50, USA) fitted with a photo-diode array (PDA) detector (DIONEX, PDA-100) at 254 nm was employed. The HPLC conditions employed for analysis of the samples taken from the anode chamber were similar to those described in previous studies (Adelaja et al. [11]). The analytical column was a reverse-phase column, Supelcosil TM LC-PAH (150 mm × 4.6 mm). The method described by Kermanshahi pour et al. [14] was employed for the extraction of phenanthrene from the soil MFC samples. Degradation efficiencies were evaluated based on the residual phenanthrene (PHE) concentration at the end of MFC operation. In order to determine the amount of phenanthrene present in the solid phase of the soil MFCs, phenanthrene extraction was conducted by adding 5 mL of acetonitrile (ACN) to 2 g of collected soil sample in a centrifuge tube, as previously described by Coates et al. [15], and vortexing for 5 min. The soil-solvent mixture was sonicated for 1 h and subsequently centrifuged at 12000 g for 15 min. The supernatant liquid was filtered through 0.22 µm filter units into 2 mL glass vials before HPLC analysis as described above. Determination of bromate and COD removal in MFCs A spectrophotometric method was employed for the quantitative determination of bromate removal at the end of MFC operation (Emeje et al. [16]). The procedure for sample preparation prior to analysis at 620 nm using a UV-Vis spectrophotometer, M 6300 model (Jenway, Staffordshire, UK), was followed as described by Adelaja et al. [17]. The percentage bromate removal was calculated based on the residual bromate at the end of the experiment.
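For illustration only, the percentage removal calculations referred to above (degradation efficiency from residual phenanthrene and bromate removal from residual bromate) reduce to a simple ratio; the short Python sketch below shows the arithmetic with entirely hypothetical concentration values, not measurements from this study.

```python
def percent_removal(initial_conc, residual_conc):
    """Percentage removal based on initial and residual concentrations."""
    return 100.0 * (initial_conc - residual_conc) / initial_conc

# Hypothetical example values (placeholders, not data from this study):
phe_initial, phe_residual = 1000.0, 140.0   # mg kg-1 dry soil
bro_initial, bro_residual = 1000.0, 80.0    # mg L-1 catholyte

print(f"Phenanthrene removal: {percent_removal(phe_initial, phe_residual):.1f} %")
print(f"Bromate removal:      {percent_removal(bro_initial, bro_residual):.1f} %")
```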
A COD titrimetric procedure was used for determination of the chemical oxygen demand (COD) of the samples, in line with Environment Agency (UK) Standard Method 5220 D (APHA [18]). pH, conductivity and total dissolved solids (TDS) measurements The pH of the anodic medium during MFC operation and after each cycle was determined with a Mettler Toledo MP220 pH meter (UK). pH changes in the cathode chamber of the MFCs containing bromate as catholyte were also monitored. An Oakton PC-700 conductivity meter (Oakton Instruments, UK) was used to measure conductivity and TDS. A 1:5 (w/v) soil-deionized water mixture was used to determine soil conductivity and TDS. Electrochemical characterisation The performance of the soil MFCs was assessed by measuring the cell voltage and electric current across an external resistance of 1000 Ω with a multimeter connected to a personal computer by a data acquisition system on an hourly basis under normal operating conditions, as described by Logan [19]. Polarisation curves were determined by gradually increasing the external resistance from 1 Ω to 1 MΩ, with the pseudo steady-state voltage recorded after about 5 minutes. The total internal resistance (Rint) of the soil MFCs was determined using the polarisation slope method, while current and power densities were calculated using standard methods (Logan et al. [19], Fan et al. [20], Sleutels et al. [21]). Cyclic voltammetry analysis The bioelectrochemical behaviour of the soil MFCs was examined using cyclic voltammetry with the aid of a potentiostat-galvanostat (PG 581, Uniscan Instruments, Buxton, UK). The scanned potential was between -600 and +200 mV (vs Ag/AgCl reference electrode), at a scan rate of 10 mV/s. The anode served as the working electrode, the cathode served as the counter electrode, and an Ag/AgCl electrode (BASi, Germany; 4 M KCl, +196 mV versus the standard hydrogen electrode (SHE) at 25 °C) in a sealed chamber was used as the reference electrode. The bioelectrochemical cell was kept at 30 °C unless otherwise stated. The device was operated remotely through a personal computer (PC) using UIE Chem v3.54 software. Bioluminescence toxicity assays The Microtox standard acute toxicity method was employed in conducting toxicity assays on samples drawn from the soil MFCs before and after MFC operation (Gaudet [22]). Soil samples were centrifuged at 13.2 x g and subsequently a Whatman filter (0.22 µm) was used to remove suspended biomass and soil particles. The bioluminescent marine bacterium used for this assay was Vibrio fischeri (13938), which was grown, harvested and re-suspended in a sterile 2% w/v NaCl solution before use, as described by Adelaja et al. [17], for the assay using a Fluostar Optima luminometer. Data analysis Statistical analyses were carried out using MATLAB software at a significance level of P = 0.05. All experiments were done in duplicate, and error bars represent the standard deviation of the mean. Data were treated using correlation analysis to determine the degree of data association. Pollutant removal and ROI determination during MFC operation Phenanthrene degradation at different distances (2, 4 and 8 cm) from each MFC anode was monitored on days 10, 20, 30, 40, 50 and 60 during the 60-day MFC operation, as shown in Figures 2 and 3. At day 10, phenanthrene removal from soil at 2 cm from the anode's outer surface was 55 % and 50 % for the MFC+S and MFC-S reactors respectively, which was 120-293 % higher than the non-MFC reactors (E+S and F-S) respectively (Figure 2).
The observed rapid decrease in phenanthrene concentration in the soil after MFC start-up may be attributed to the adsorption of phenanthrene near the MFC anodes. This observation corroborates previous studies conducted by Zhang et al. [7] and Lu et al. [23], where similar observations were linked to hydrocarbon adsorption on the electrode's surface. Phenanthrene degradation rates decreased with increasing distance from the anodes, from 50-55 % at 2 cm to 38-40 % at 8 cm among the MFC reactors. The steep negative slope between the ROI and phenanthrene removal at day 10, as shown in Figure 3, indicates a smaller ROI for the MFC reactors. The decrease in degradation rates may possibly be due to mass transfer limitations and lower activity of electrochemically active microorganisms. However, this limitation was gradually overcome with time from 10 to 60 days of operation, as phenanthrene removal increased, especially at locations further away from the MFC anodes. The creation of a concentration gradient, as the removal of phenanthrene closer to the anode increased with time, could have driven mass movement in the bulk electrolyte towards the electrode, thereby increasing the ROI. There was continuous current production in the MFCs over the experimental period, which was perhaps sustained through steady mass transfer of the substrate towards the electrode. The phenanthrene degradation rates in the test MFCs were, however, higher than those of the control systems, with a better impact of the MFCs on degradation regardless of the distance from the anode's outer surface. Consequently, this may have led to improved mass transfer to the electrode and supported faster degradation rates. Phenanthrene depletion increased with time and reached 84.5-91.6 % in MFC+S and 78.3-86.1 % in MFC-S respectively (compared to 37.9-64.1 % in the controls), with the phenanthrene fraction remaining in the soil being about the same for all radial distances from the anodes (Figure 2). There was a statistically significant difference (p = 0.001) between the MFC+S and MFC-S reactors at all distances from the anode over the incubation period. The enhanced degradation performance observed in the MFC+S reactor compared to the MFC-S reactor may be attributed to the contribution of the added surfactant in enhancing phenanthrene availability in the soil. The amount of surfactant used up during the study (250 mg L-1) appears to have enhanced surfactant sorption on the soil, which may have increased phenanthrene partitioning onto the soil particles. The results of this study are consistent with earlier findings by Lu et al. [23] on the influence of the ROI on improved petroleum-hydrocarbon bioremediation of polluted soil in MFCs using two different cheap electrodes, biochar and graphite. In this study, the configuration of the lab-based MFC bioreactor dictates the determination of the ROI for the soil MFCs. However, since MFC performance is a function of the measured ROI, ROIs can be extrapolated from such experimentally obtained data. The linear regression equations derived for the active MFCs at day 10 and day 60 can be used to descriptively explain and predict the extension of the ROIs with respect to time in this saturated soil environment (Figures 3A and 3B).
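As a minimal sketch of how such a regression-based extrapolation can be carried out, the Python snippet below fits a straight line to removal efficiency versus radial distance and solves for the distance at which the fitted removal falls to zero; the removal values are hypothetical placeholders rather than the data of Figure 3, and the paper's definition additionally references removal relative to the baseline control, which is omitted here for simplicity.

```python
import numpy as np

# Hypothetical removal efficiencies (%) at radial distances (cm) from the anode
distance_cm = np.array([2.0, 4.0, 8.0])
removal_pct = np.array([55.0, 47.0, 40.0])   # placeholder values, not study data

# Least-squares fit: removal = slope * distance + intercept
slope, intercept = np.polyfit(distance_cm, removal_pct, 1)

# Maximum ROI estimated as the distance where the fitted removal reaches zero
max_roi_cm = -intercept / slope

print(f"Fitted line: removal = {slope:.2f} * d + {intercept:.2f}")
print(f"Estimated maximum ROI: {max_roi_cm:.1f} cm")
```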
The maximum ROI is the maximum distance from the MFC anode at which the phenanthrene removal efficiency is zero percent in reference to the baseline (anaerobic) control. From day 10 to day 60, the estimated maximum ROI increased from 24-27 cm to 39-41 cm, with further extension of the maximum ROI predicted with increasing duration of soil MFC operation. Lu et al. [23] demonstrated a further increase in the maximum ROI at longer periods (about 120 days) of MFC operation under similar operating conditions, corroborating the findings of this study. The radius of influence of a particular remediation technology significantly determines its remediation efficiency and cost effectiveness relative to other technologies, in line with environmental considerations, and is therefore pivotal for its selection as a preferred remediation strategy. Knowledge of the ROI could also be very useful in determining an adequate anode electrode size and MFC reactor spacing in large-scale field applications. The phenanthrene concentration in the aqueous phase of the soil MFCs was relatively constant across both test MFCs (i.e. the MFC+S and MFC-S reactors) over the period of MFC operation, but was significantly lower than in the control reactors, indicating better degradation efficiency (Figure 4B). The insignificant change in pore-water phenanthrene concentration (especially in the MFC+S and MFC-S reactors) might be due to the dynamic balance in phenanthrene partitioning between the soil phase and the aqueous phase. This dynamic balance in phenanthrene partitioning at the aqueous-soil interface suggests a possible balance between the phenanthrene degradation and desorption rates in the MFC. Figures 2 and 4A clearly demonstrate that phenanthrene degradation near the electrode was significantly enhanced relative to the control reactors. The phenanthrene fractions remaining on all MFC anodes at the end of the test period were less than 10 %, indicating that the majority of the PHE adsorbed by the electrodes was biodegraded by anodic microbial respiration rather than removed by chemical/physical adsorption, and that the adsorption process merely supported faster biodegradation rates. The removal of phenanthrene was similar to that of TCOD, as shown in Figure 5. There was a negative linear relationship between the phenanthrene removal efficiency and the radial distance from the MFC anodes (Figure 3). The slope of this relationship gradually became less negative with time, indicating a steady expansion of the ROI with respect to the time of reactor operation. In this study, bromate removal in the cathode chamber, coupled with phenanthrene degradation, was monitored over the test period. Bromate removal in both MFCs was about 80-95 % on average, which is significantly higher than in the open-circuit controls (15-40 %) over the 60-day period of MFC operation (Figure 6). In the open-circuit MFCs, the cathode and the anode terminals are physically separated and thus there is no transfer of electrons to the cathode, which is needed for the electrochemical reduction of bromate to non-toxic bromide ions. However, the small bromate reduction (15-40 %) observed in the open-circuit MFCs in this study could be due to possible electron transfer across the permeable membrane from the anode to the cathode. Such interstitial electron transfer, especially in soil systems, has previously been reported to account for the reduction or oxidation of pollutants in open-circuit MFCs (Huang et al. [24], Nielsen et al. [25]).
Moreover, this study demonstrated that a tubular MFC configuration can significantly enhance phenanthrene biodegradation, up to 293 % of that from the baseline reactor, with the enhanced biodegradation from the MFC anodes extending even to an ROI of 8 cm (Figure 2). These results underpin the deployability of this MFC design in real field practice to significantly boost the biodegradation of petroleum-polluted soils coupled with bromate removal. This passive remedial technology is environmentally friendly and can significantly reduce clean-up time in a cost-effective manner. Voltage generation and electrochemical characterisation of the soil MFC performance Phenanthrene and bromate removal during tubular MFC operation over the test period was accompanied by concomitant biogenic electricity generation, as observed in Figure 7. Current density reached approximately 60 mA m-2 and 53 mA m-2 for MFC+S and MFC-S respectively during MFC operation (across a 1000 Ω resistor). There was a good relationship between electricity generation and microbial phenanthrene degradation during MFC operation, after the completion of the lag phase. The maximum power densities obtained for MFC+S and MFC-S were 4.69 mW m-2 and 4.06 mW m-2 respectively during the experimental period. These voltage generation outcomes were similar to previous reports on electricity generation by MFCs operated on waterlogged contaminated soils (Huang et al. [24], Wang et al. [1], Lu et al. [23]). The gradual increase in current generation may be due to bacterial acclimation and a marked rise in the activity of the electrochemically active microbial population in the soil. Current output during reactor operation was erratic, probably due to the production of biotransformed intermediate products resulting from phenanthrene degradation and to mass transfer limitations during the long operational period (Lu et al. [23]; Huang et al. [24]). Cyclic voltammograms (CVs) of the anode chamber of the MFCs were recorded at days 10 and 30 of incubation (Figure 8). The cyclic voltammograms for both MFC+S and MFC-S showed a substantial oxidation/reduction peak potential shift as microbial phenanthrene degradation proceeded in the MFCs from day 10 to day 30, indicating a detectable drop in the anode/oxidation potential resulting from the increasing microbial electrochemical oxidation processes occurring at the anode. However, there was a further slight shift in redox potential in the MFC with surfactant amendment (MFC+S) relative to the MFC with no surfactant, as shown in Figure 8. This might indicate a positive impact of surfactant addition, by possibly increasing phenanthrene bioavailability and mobility within the soil matrices, or the surfactant could act as a redox electron shuttle for ferrying electrons to the anode. The addition of surfactant to MFCs, as shown in this study, could enhance phenanthrene removal and improve the electrochemical performance of the MFC. Similarly, Wu et al. [26] reported enhanced toluene degradation and power generation in MFCs amended with a surfactant, pyocyanin, which also corroborates the findings of this study. The findings of this study have demonstrated the potential practical application of this tubular-type soil MFC system for the degradation of hydrocarbon-contaminated subsurface soil environments coupled with concomitant bioelectricity production.
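For readers interested in how the reported current and power densities relate to the raw voltage logs, the following minimal Python sketch converts a cell voltage measured across a fixed external resistance into current and power densities normalised to the anode area; the voltage value is a placeholder, and only the 1000 Ω load and the 156 cm2 anode area follow the set-up described above.

```python
# Convert a logged cell voltage into current and power densities.
# Placeholder inputs; only the resistance and anode area follow the set-up above.
R_ext = 1000.0            # external resistance, ohms
anode_area_m2 = 156e-4    # 156 cm^2 expressed in m^2
voltage_v = 0.35          # hypothetical logged cell voltage, volts

current_a = voltage_v / R_ext        # Ohm's law: I = V / R
power_w = voltage_v ** 2 / R_ext     # P = V^2 / R

current_density_mA_m2 = current_a / anode_area_m2 * 1e3
power_density_mW_m2 = power_w / anode_area_m2 * 1e3

print(f"Current density: {current_density_mA_m2:.1f} mA m^-2")
print(f"Power density:   {power_density_mW_m2:.2f} mW m^-2")
```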
Electrical outputs generated in soil MFCs could be employed in monitoring the contaminant degradation profile and in reducing the frequency of soil sampling (in field applications), and the electricity generated during biodegradation can be used to power remote sensors. Changes in physicochemical characteristics of soil Changes in the physicochemical properties of the soils, such as pH, electrical conductivity and total dissolved solids (TDS), are among the key parameters for the quantification and validation of hydrocarbon removal driven by microbial action (Figure 9). The soil pH values for all the MFCs except the controls decreased by up to 0.23 pH units in the first 10 d at a radial distance of 2 cm from each anode (lower than the values obtained at radial distances of 4 and 8 cm), demonstrating a minor proton build-up near the anode. A rise in electrical conductivity and TDS of about 25-54 % and 17-37 %, respectively, was closely associated with the decrease in pH. A possible explanation for the observed trend might be the adsorption of ions present in the soil matrix and the accumulation of hydrogen ions very close to the anode. From day 10 to day 60, fluctuations in pH, EC and TDS were observed at all radial distances from the anode for the active MFCs at each sampling point. The observed fluctuations in the physicochemical properties of the soil may possibly be due to the dynamic formation of readily oxidisable intermediate organic acids from phenanthrene metabolism and their subsequent consumption, which in turn resulted in dynamic changes in microbial population distribution and redox potentials at the anode during the test period (Du et al. [27], Allen et al. [28]). Notably, the development of ionic species, such as intermediate compounds formed during the biodegradation pathways, and the dissolution of minerals may lead to an increase in conductivity and TDS in soil, especially near the anodes (Allen et al. [28], Wang et al. [1]). Soil microbial activity declined as EC increased, and this might greatly influence other soil chemical activities such as respiration, nitrification, advection/adsorption, denitrification, and residue decomposition (Allen et al. [28], Johnsen et al. [29]). The findings from the data analysed above indicate that the radial distance from the anode of the active MFCs was directly related to phenanthrene removal, which was catalysed by high microbial activity at distances close to the anode. As presented in Table 1, phenanthrene removal is correlated negatively with TDS in the soil (p < 0.05) and with electrical conductivity (p < 0.05), but positively correlated with total COD (TCOD). However, based on the statistical analysis of the findings in this study, there is no statistically significant correlation between phenanthrene removal, bromate removal and pH, indicating that phenanthrene removal does not necessarily depend on the pH or on bromate removal rates. Notably, bromate removal, on the other hand, is negatively correlated with pH (p < 0.05). Such correlations, associated with phenanthrene degradation and bromate removal, give a holistic view of the potential of MFC systems to improve phenanthrene removal coupled with bioelectricity generation and to support the restoration of the contaminated soil to its natural ecological status. Toxicity determinations in contaminated soil after MFC treatment The disposal of phenanthrene-contaminated soil can be carried out only when the pollutant levels and other toxic organic intermediate products are within the permissible concentration levels set by the relevant regulatory agencies.
This ensures it is environmentally safe and poses no immediate danger to human health and ecosystems, which is the ultimate goal of any successful remediation process (Liu et al. [30]; Melo et al. [31], Ayed et al. [32]). Microbial degradation of pollutants usually leads to partial mineralization and thus to the formation of degradation products with unknown chemical and toxicological characteristics, which may sometimes be even more toxic than the parent pollutant. The percentage relative inhibition of the growth of the bioluminescent marine bacterium, V. fischeri, in soil extracts taken at the start and end of the MFC operational period is shown in Figure 10. Bioluminescence-based acute toxicity assays conducted using V. fischeri indicated a significant (p < 0.01, t-test) decrease in toxicity of 65 % and 35 % in the MFC amended with surfactant (MFC+S) and the MFC with no surfactant (MFC-S) respectively, compared to the baseline controls. From Figure 10, the MFC reactors and baseline controls after 60 days of incubation were generally less toxic than at the start of treatment. Eco-toxicity testing is one of the techniques employed in the assessment of the ecological profile of treated sites and may inform decisions on on-site treatments towards a successful reclamation of the contaminated site (Hankard et al. [35], Vogt et al. [36], Sarkar et al. [3]). Therefore, this study has demonstrated the detoxification capability of the MFC system, compared with natural attenuation (i.e. a do-nothing scenario), in the treatment of phenanthrene-contaminated soil in a timely and effective manner under the same environmental conditions. Conclusion In this study, the performance of a tubular MFC system in phenanthrene-contaminated soil was investigated. This MFC system significantly enhanced the biodegradation efficiency of phenanthrene (86 %) in the soil within an ROI of up to 8 cm compared to the non-MFC controls, with a projected maximum ROI of up to 40 cm. The findings of this study established, for the first time, the simultaneous removal of phenanthrene and bromate (95%) coupled with concomitant bioelectricity generation using MFC systems. MFC technology may be used for in situ decontamination of soils due to its potential detoxification capacity, and could be deployed directly as a prototype MFC design in field applications or integrated with existing infrastructure. The electricity generated could be used to power wireless sensors for remote site monitoring and as an indicator for real-time contaminant degradation profiling, thus greatly reducing the cost of the frequent soil sample analysis for pollutant degradation monitoring usually demanded by other non-bioelectrochemical, conventional remediation technologies. Appendix A. Supplementary data Supplementary data associated with this article can be found in the online version.
7,337.4
2021-05-04T00:00:00.000
[ "Engineering" ]
Chapter 20: Applying Recommender Systems for Learning Analytics: A Tutorial With the emergence of massive amounts of data in various domains such as educational data mining, big data, and Web data, recommender systems have become a practical approach to provide users with the most suitable information based on their past behaviour and to turn the abundance of data from a problem into an asset. For instance, data mining approaches can make recommendations based on similarity patterns detected from the collected data of users. Furthermore, recommender systems have become an important part of LA research. Recommender systems can be differentiated according to their underlying technology and algorithms. Roughly, they are either content-based or use collaborative filtering, the two main methods used in recommender systems; content-based recommenders recommend an item to the user by comparing the representation of the item's content with the user's profile, whereas collaborative filtering recommenders find like-minded users and introduce them as so-called nearest neighbours to some target user; then they predict an item's rating for that user on the basis of the ratings given to this item by the target user's nearest neighbours (co-ratings) (Herlocker, Konstan, Terveen, 2012; Schafer, Frankowski, Herlocker, & Sen, 2007). In the past, we have applied recommender systems in various educational projects with different objectives regarding the development and evaluation of recommender system algorithms in education. As described by the RecSysTEL working group for Recommender Systems in Technology-Enhanced Learning (2015), it is important to apply a standard evaluation method. The working group identified a research methodology consisting of four critical steps for evaluating a recommender system in education: 1. A selection of dataset(s) that suit the recommendation task, for instance, datasets containing recommendation items for a user. 2. An offline evaluation of different algorithms on the selected datasets, including well-known datasets (if possible, education-oriented datasets; MovieLens, for instance, makes movie recommendations), to provide insights into the performance of the recommender systems. 3. A comprehensive user study to test psycho-educational effects on learners as well as the technical aspects of the designed recommender system. 4. A deployment of the recommender system in a real-life application, where it can be tested under realistic, normal operational conditions with actual users. The above four steps should be accompanied by a complete description of the recommender system, reported in the special section on educational datasets 1 and made available for other researchers under certain conditions, to allow other researchers to repeat and adjust any part of the research, gain comparable results and new insights, and thus build up a body of knowledge around recommender systems in learning analytics. In the remainder of this chapter, we describe an experimental study that followed the research methodology described above for recommender systems in learning; then, we conclude.
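To make the neighbour-based prediction step described above concrete, the following minimal Python sketch predicts a target user's rating for an item from the co-ratings of the most similar users; the tiny rating matrix and the cosine-similarity choice are illustrative assumptions, not the configuration used in the chapter's study.

```python
import numpy as np

# Toy user-item rating matrix (rows: users, columns: items); 0 = unrated.
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 4, 1],
    [1, 1, 5, 4],
    [0, 1, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    mask = (a > 0) & (b > 0)              # compare only co-rated items
    if not mask.any():
        return 0.0
    return float(np.dot(a[mask], b[mask]) /
                 (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask])))

def predict(target_user, item, k=2):
    sims = [(cosine_sim(ratings[target_user], ratings[u]), u)
            for u in range(len(ratings))
            if u != target_user and ratings[u, item] > 0]
    neighbours = sorted(sims, reverse=True)[:k]   # k nearest neighbours
    num = sum(s * ratings[u, item] for s, u in neighbours)
    den = sum(abs(s) for s, _ in neighbours)
    return num / den if den else 0.0

print(f"Predicted rating of user 0 for item 2: {predict(0, 2):.2f}")
```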
A RECOMMENDER SYSTEM EXPERIMENT IN THE EDUCATIONAL DOMAIN In this section, we describe how one should evaluate a recommender system in learning, making use of the methodology described above. To this methodology, however, we added an additional step, that of development, which is presented in a RecSysTEL special issue 1 (Manouselis et al., 2012). In our study, our target environment is social learning platforms in general. Social learning platforms work similarly to social networks such as Facebook but serve the purpose of learning and knowledge sharing. They are intended for educational stakeholders such as teachers, students, learners, policy makers, and so on. Our target social learning platform is Open Discovery Space 2. The interface has been designed with students, teachers, parents and policy makers in mind. First, it will empower stakeholders through a single, integrated access point for eLearning resources from dispersed educational repositories. Secondly, it engages stakeholders in the production of meaningful educational activities by using a social-network style multilingual portal, offering eLearning resources as well as services for the production of educational activities. Thirdly, it will assess the impact of the new educational activities, which could serve as a prototype to be adopted by stakeholders in school education. To find out which recommender system can best suit the data and information needs of a social learning platform, the main goal is to generate suitable recommendations for users. In the following sub-sections, we describe the study step by step. Dataset Selection. The choice of dataset depends on the type of data. In our case, the target social learning platform was not yet in operation, so we chose the MACE and OpenScout datasets for the following reasons: 1. The datasets provide social data of users (ratings, tags, reviews, et cetera) on learning resources, so the structure, content, and target users of the datasets resemble those of the target platform. 2. Running recommender algorithms on these datasets helps us to evaluate their performance beforehand. 3. Both the MACE and OpenScout datasets comply with a common format for storing social data. 2 http://opendiscoveryspace.eu Besides these two datasets, we also tested the MovieLens dataset as a reference since, up until now, the educational domain has been lacking reference datasets for study, unlike the ACM RecSys conference series, which deals with recommender systems in general. Table 20.1 provides an overview of all three datasets, including their sparsity. Offline Data Study: Algorithms. In this second step, we tried to select algorithms that would work well with our data. First, it is important to check the input data to be fed into the algorithms; the data of the selected datasets includes interaction data of users with learning resources (items). 2. We ran the model-based CFs on the sample data. 3. We compared the algorithms from steps 1 and 2. In addition to the baselines, we evaluated a graph-based approach that finds neighbours using the conventional k-nearest neighbour method. Performance Evaluation. After choosing suitable datasets and recommender algorithms, we arrive at the task of evaluating the performance of the candidate algorithms according to an evaluation protocol (Herlocker et al., 2004). A good description of an evaluation protocol should address the following questions: Q1. What is going to be measured? If the input data contains explicit user ratings, we measure the prediction accuracy of the recommendations generated. By this, we want to measure how much the rating predictions differ from the actual ones by comparing a training set and a test set. The training and test sets result from splitting our user ratings data (the same as user interaction data). In our datasets, ratings range from 1 to 5.
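As an illustrative, purely hypothetical example of the accuracy measurement just described, the Python sketch below compares predicted ratings against held-out test ratings using MAE and RMSE; the numbers are toy values on the 1-5 scale, not results from the datasets used in the study.

```python
import math

# Toy held-out test ratings and the recommender's predictions (1-5 scale)
actual    = [4, 3, 5, 2, 4, 1]
predicted = [3.8, 3.4, 4.5, 2.6, 3.9, 1.7]

n = len(actual)
mae  = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

print(f"MAE  = {mae:.3f}")
print(f"RMSE = {rmse:.3f}")
```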
If the input data contains implicit user preferences, such as views, bookmarks, downloads, et cetera, we make use of the F1 score, since it combines precision and recall, which are both important metrics in evaluating the accuracy and coverage of the recommendations generated (Herlocker et al., 2004). F1 ranges from 0 to 1. The number of recommendations on which a metric is measured is also known as a cut-off; we measured the F1 for the top 10 recommendations of the result set for each user. Figure 20.2 shows the values of F1. As Figure 20.2 shows, the graph-based approach performs best for MACE (8%) and MovieLens (24%), and the selected memory-based and model-based CFs come in second and third place right after the graph-based CF. For OpenScout, the memory-based approach performs better, with a difference of almost 1%. In conclusion, according to the results presented in Figure 20.2, the graph-based approach seems to perform best overall, as indicated by an improved F1, which is an effective combination of the precision and recall of the recommendations made. Deployment of the Recommender System and User Study In the educational domain, the importance of user studies has been emphasized. Since the main aim of recommender systems in education goes beyond accurate predictions, it extends to the usefulness, novelty, and diversity of the recommendations. However, the majority of recommender system studies omit user studies, probably because user studies are time consuming and complicated. We addressed this by conducting a user study with our target platform. For this, we integrated the algorithms that performed best and asked users to judge the recommendations made for them. For this we used a short questionnaire covering usefulness, novelty, diversity, and serendipity. The full description and results of this data study and the follow-up user study have not been published yet. The user study does not contradict the data study, but it underlines the importance of running user studies that can go beyond the success indicators of data studies, such as prediction accuracy. Accuracy is one of the important metrics in evaluating recommender systems, but relying solely on this metric can lead data scientists and educational technologists down less effective pathways. Accessing most educational datasets is challenging since they are not publicly and openly available. Some earlier studies used the same datasets, and some of the algorithms used differ from their results; therefore, we could not gain additional information from the comparisons. One possible reason is that the studies use different versions of the same dataset, because the collected data belongs to different periods of time. For the MACE dataset, for instance, different versions are available. This problem originates from the fact that, unfortunately, there is no gold-standard dataset in the educational domain comparable to the MovieLens dataset3 in the e-commerce world. In fact, the LA community is in need of several representative datasets that can be used as references for comparing approaches. The main aim is to achieve a standard data format to run LA research. This idea was initially proposed and later followed up by the SoLAR Foundation. In response to this lack of comparable results and the pressing need for a research cycle that uses data repositories, a project called LinkedUp4 follows a promising approach towards providing a set of gold-standard datasets (Berners-Lee, 2009). The LinkedUp project aims to provide a linked data pool for learning analytics research and to run several data competitions through the central data pool.
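Returning to the accuracy evaluation used in the offline study, the following minimal Python sketch computes precision, recall, and F1 at a top-10 cut-off for a single user; the recommended and relevant item sets are invented for illustration and do not correspond to the MACE, OpenScout, or MovieLens results above.

```python
def f1_at_k(recommended, relevant, k=10):
    """Precision, recall and F1 for the top-k recommendations of one user."""
    top_k = recommended[:k]
    hits = len(set(top_k) & set(relevant))
    precision = hits / len(top_k) if top_k else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical example: item IDs recommended to a user vs. items the user liked
recommended = ["i3", "i7", "i1", "i9", "i4", "i8", "i2", "i6", "i5", "i0"]
relevant = ["i7", "i4", "i11", "i2"]

p, r, f1 = f1_at_k(recommended, relevant, k=10)
print(f"Precision@10 = {p:.2f}, Recall@10 = {r:.2f}, F1@10 = {f1:.2f}")
```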
PRACTICAL IMPLICATIONS AND LIMITATIONS Overall, the outcomes of different recommender system studies in the educational domain are still hardly comparable due to the diversity of algorithms, learner models, datasets, and evaluation protocols. CONCLUSION The main goal of this chapter has been to illustrate how to identify the most appropriate recommender system for a learning environment. To do so, we followed a methodology for evaluating recommender systems in learning. The methodology consists of four main steps: 1. Select suitable datasets, preferably from the educational domain and, in case the actual data is not available yet, similar to the target data. 2. Run a set of candidate recommender algorithms on the selected datasets; this step should reveal which recommender algorithm works best with the input data. 3. Conduct a user study to measure user satisfaction with the recommendations made for them. 4. Deploy the recommender system in the target learning platform. We stress the importance of running user studies even though they are quite time consuming and complicated. This chapter does not represent the opinions of the European Union, and the European Union is not responsible for any use that might be made of its content. The work was supported by the EU project LACE. Therefore, we chose to use the Collaborative Filtering (CF) family of recommender systems. CF algorithms rely on the interaction data of users, such as ratings, bookmarks, views, likes, et cetera, rather than on the content data used by content-based recommenders. CF recommenders can be either memory-based or model-based, and either item-based or user-based, referring to whether neighbours are computed over items or over users. In our study, we made use of all types and techniques: both memory-based and model-based, as well as both user-based and item-based. Figure 20.1 shows our experimental set-up. 1. We compared the performance of memory-based CFs, including both user-based and item-based, by employing different similarity functions. Finally, Figure 20.2 shows the F1 results of the best performing algorithms. Figure 20.2. F1 of the graph-based CF and the best performing baseline memory-based and model-based CFs.
2,534
2017-01-01T00:00:00.000
[ "Computer Science" ]
The Synthesis and in vitro Study of 9-fluorenylmethoxycarbonyl Protected Non-Protein Amino Acids Antimicrobial Activity. Introduction Despite the fact that amino acids, amino acid derivatives, and peptides have been studied in various fields of chemistry and medicine for decades, interest in peptides remains topical today. Peptides are pharmacologically active compounds used in the treatment of various diseases, from diabetes to tumors [1][2][3]. Nowadays, among amino acid derivatives, protected amino acids are being successfully studied, as the introduction of a protective group can strongly change the properties of the amino acid. One such protecting group is the 9-fluorenylmethoxycarbonyl group, which is considered the best protecting group in peptide synthesis, as it can be introduced into the amino acid structure in high yield and can be selectively removed at the end when obtaining the free peptide. Moreover, it has anti-inflammatory and antimicrobial properties, according to a number of publications in the last decade. For example, the 9-fluorenylmethoxycarbonyl-phenylalanyl-phenylalanine dipeptide possesses antimicrobial activity against Gram-positive and Gram-negative bacteria. Chemistry Materials All reagents were obtained from commercial sources and used without further purification. Thin layer chromatography (TLC) was carried out on Merck aluminium-foil-backed sheets pre-coated with 0.2 mm Kieselgel 60 F254. Melting points (mp) were determined using an "Elektrothermal" apparatus. 1 H and 13 C NMR spectra were recorded on a Varian Mercury 300 MHz spectrometer using TMS as an internal standard. Elemental analysis was carried out on a Euro EA3000. The synthesis of 9-fluorenylmethoxycarbonyl-(S)-β-(N-imidazolyl)-α-alanine: A mixture of (S)-β-(N-imidazolyl)-α-alanine (2) (0.0043 mol) and 0.456 g (0.0043 mol) of Na 2 CO 3 was placed in a round-bottomed flask. The mixture was stirred at room temperature with a magnetic stirrer until a clear solution was formed, after which 1.955 g (0.0058 mol) of 9-fluorenylmethoxycarbonyl-N-oxysuccinimide ester (1) dissolved in 2 ml of 1,4-dioxane was added to the reaction mixture. The reaction mixture was stirred at room temperature for 3 h. The reaction was monitored by TLC [SiO 2 , CHCl 3 /ethyl acetate/MeOH (4:2:1); developer: chlorotoluidine]. To remove the unreacted starting material, the reaction mixture was first extracted twice with diethyl ether; then 20 ml of distilled ethyl acetate was added to the reaction mixture, which was acidified with 2 N hydrochloric acid to pH 2; finally, 10 ml of ethyl acetate was added and the mixture was extracted twice. The organic fractions were then combined and dried over anhydrous sodium sulfate. After decanting, the organic solvents were removed by vacuum evaporation at 50-60 °C. The target product was recrystallized from ethyl acetate-hexane 1:3, filtered and dried under vacuum (50-60 °C). As a result, a white crystalline mass was obtained. The syntheses of 9-fluorenylmethoxycarbonyl-(S)-α-methylphenylalanine, 9-fluorenylmethoxycarbonyl-(S)-α-allylglycine, 9-fluorenylmethoxycarbonyl-(S)-α-propargylglycine and 9-fluorenylmethoxycarbonyl-(S)-leucine amino acids were carried out by the above-mentioned method; the physicochemical parameters of the synthesized amino acid derivatives were investigated and compared with the literature data.
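To make the reagent ratio in the procedure above explicit, the short Python sketch below back-calculates the molar amount and equivalents of the Fmoc-OSu ester from the stated mass; the molar mass used (approximately 337.3 g/mol for Fmoc-OSu) is supplied as an assumption for illustration and is not quoted from the paper.

```python
# Back-of-the-envelope stoichiometry check for the Fmoc protection step.
# The MW value below is an assumption for illustration, not taken from the paper.
MW_FMOC_OSU = 337.3          # g/mol, 9-fluorenylmethoxycarbonyl-N-oxysuccinimide

mass_fmoc_osu_g = 1.955      # mass stated in the procedure
mol_fmoc_osu = mass_fmoc_osu_g / MW_FMOC_OSU

mol_amino_acid = 0.0043      # mol of (S)-beta-(N-imidazolyl)-alpha-alanine
equivalents = mol_fmoc_osu / mol_amino_acid

print(f"Fmoc-OSu: {mol_fmoc_osu:.4f} mol (~{equivalents:.2f} eq vs. the amino acid)")
```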
Test culture: To determine the antimicrobial properties of the protected non-protein amino acids, the conditionally pathogenic Gram-negative Salmonella typhimurium and Gram-positive Bacillus subtilis G17-89 from the Microbial Culture Collection of the Microbial Depository Center (MDC) at the SPC "Armbiotechnology" NAS RA were used. Test cultures were grown on solid Nutrient agar (Himedia, India) at pH 7.2 for 16-18 h at 37 °C. Cells were then harvested and suspended in Nutrient broth at a concentration of approximately 2.2 x 10 6 CFU/ml. Detection of antimicrobial activity: The spot-on-lawn method on the test culture pre-plated in the solid medium was applied. For the spot-on-lawn method, a suspension of the overnight test culture in Nutrient broth (containing 10 6 CFU/ml) was spread across the surface, and aliquots of the investigated samples of 20, 40, and 60 µg were applied on top of the test culture using a micropipette. Plates remained at 4 °C for 1-2 h to promote diffusion of the samples. Plates were then incubated under temperature-controlled conditions at 30 °C for 24-48 h. Antimicrobial activity was assessed by measuring the size of the inhibition zone (diameter) of test culture growth (Ø, mm) after 24 h of incubation in a thermostat at 30 °C [8]. Determination of resistance to antibiotics: To determine the resistance of the test cultures to antibiotics, the antibiotic disk method was applied. Each strain was inoculated into the appropriate broth and incubated at 37 °C for 16 h. Using the spread plate technique, the cultures were inoculated onto the plates with sterile swabs. The antibiotic disks were placed on the plates. Agar plates with antibiotic disks were then incubated at 37 °C for 24 h. The diameters of the inhibition zones were measured. The results were expressed as sensitive (+) and resistant (-) [8-10]. Results and discussion A method was developed for the synthesis of the potentially biologically active new non-protein protected amino acid 9-fluorenylmethoxycarbonyl-(S)-β-(N-imidazolyl)-α-alanine, and other well-known protected amino acids were synthesized in the same way. The schematic diagram of the reaction is given in Fig. 1. The non-protein amino acid was selected considering several important properties of heterocyclic amino acids. Amino acids and peptides combined with heterocycles represent an important class of therapeutic agents. Biologically active heterocycles are combined with amino acids or peptides to increase drug stability. Besides, drugs based on amino acids and peptides generally have low toxicity, high bioavailability and permeability, as well as good metabolic and pharmacokinetic properties. Synthetic heterocyclic-substituted amino acids, and peptides synthesized on their basis, are a promising choice for the near-term development of new, less toxic, and safe drugs which exhibit multiple properties [3,[11][12]. Nitrogen-containing heterocyclic compounds have become the subject of great interest due to their wide application. Studies have shown their positive activity as anti-inflammatory, antioxidant, anti-tumor, anti-ulcer, antidepressant, anti-malarial, anti-tuberculosis, antiviral, anti-hypertensive, anti-diabetic and cholinesterase inhibitory agents [13][14].
By combining two potential antimicrobial components, we synthesized a new protected compound with potential antimicrobial activity, 9-fluorenylmethoxycarbonyl-(S)-β-(N-imidazolyl)-α-alanine (3), not previously described in the literature. 9-Fluorenylmethoxycarbonyl-(S)-α-methylphenylalanine (4), 9-fluorenylmethoxycarbonyl-(S)-α-allylglycine (5), 9-fluorenylmethoxycarbonyl-(S)-α-propargylglycine (6) and 9-fluorenylmethoxycarbonyl-(S)-leucine (7) were also selected and synthesized by the same method (Fig. 2, Table 2). In the next stage of the research, the antimicrobial activity of the synthesized compounds was investigated, along with the evaluation of the resistance of the test cultures to a number of antibiotics. For the investigation of the test cultures' sensitivity to antibiotics, the disk diffusion method was used. It is one of the oldest approaches to antimicrobial susceptibility testing and remains one of the most widely used standardized antimicrobial susceptibility testing methods. A schematic diagram of the antibiotic susceptibility testing is shown in Fig. 3. As seen from the diagram, an antibiotic can be classified as bactericidal, when the total number of viable bacteria around the disk is significantly reduced and the visible inhibition zone is more than 20 mm (C); as bacteriostatic, when the number of viable bacteria around the disk is also reduced, bacterial replication is arrested and the spread of growth is limited, but the visible inhibition zone is about 15-20 mm (B); or as ineffective, when bacterial growth is observed and the bacterial concentration does not change (A). The sensitivity of the test cultures to the most commonly prescribed antibiotics was determined. The data are given in Table 3. As can be seen from the presented data, the antibiotics have different effects on test culture growth. Bacillus subtilis 17-89 was more sensitive to the examined antibiotics; growth inhibition zones were about 20-30 mm. Salmonella typhimurium G-38, by contrast, showed high resistance to almost all investigated antibiotics. The investigation of the antimicrobial activity of the synthesized compounds was then carried out. The results are presented in Table 4. As can be seen from the results, the amino acids suppress the growth of the mentioned test cultures; growth inhibition zones were about 12-18 mm. The obtained results were compared with the antibiotic disk susceptibility testing data. As shown during the in vitro investigation, samples (3) and (4) possess bactericidal activity. The amino acid 9-fluorenylmethoxycarbonyl-(S)-β-(N-imidazolyl)-α-alanine (3) inhibited the growth of Gram-negative Salmonella typhimurium G-38, and 9-fluorenylmethoxycarbonyl-(S)-α-methylphenylalanine (4) inhibited the growth of Gram-positive Bacillus subtilis 17-89 bacteria better than some antibiotics (see Table 4). Fig. 3. Schematic diagram of the research on the example of antibiotics. Table 1. Chromatography conditions for the chemical purity of amino acids. Table 3. Determination of resistance of Salmonella typhimurium G-38 and Bacillus subtilis G17-89 test cultures to antibiotics. Table 4. Antimicrobial effect of the protected amino acids on the growth of Salmonella typhimurium G-38 and Bacillus subtilis G17-89.
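As a small illustrative sketch of the classification scheme shown in Fig. 3 (bactericidal, bacteriostatic, or ineffective, judged from the inhibition zone diameter), the following Python snippet encodes those thresholds; the sample names and zone values are hypothetical and are not taken from Tables 3-4.

```python
def classify_antibacterial_effect(zone_mm):
    """Classify the effect from the inhibition zone diameter (per the Fig. 3 scheme)."""
    if zone_mm > 20:
        return "bactericidal"
    elif zone_mm >= 15:
        return "bacteriostatic"
    else:
        return "ineffective / resistant"

# Hypothetical zone diameters (mm), not values from Tables 3-4
for sample, zone in [("compound A", 18), ("compound B", 22), ("compound C", 9)]:
    print(f"{sample}: {zone} mm -> {classify_antibacterial_effect(zone)}")
```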
1,877.4
2024-02-15T00:00:00.000
[ "Chemistry", "Medicine" ]
Genome-wide analysis of TCP gene family in Osmanthus fragrans reveals a class I gene OfTCP13 modulate leaf morphology Osmanthus fragrans is a woody perennial that is cultivated in most areas of Asia and widely employed for gardening and landscaping purposes. In addition to its distinctive fragrance, there has been a growing interest in the variation of O. fragrans leaf shapes. However, there are limited reports regarding the role of TCP genes in regulating leaf morphology in O. fragrans. In this study, a total of 39 TCP members were identified at the genome level, and the sequence characteristics and tissue expression of the class I TCP genes were analyzed. The most highly expressed gene, OfTCP13, was cloned from the O. fragrans cultivar 'Yanhonggui' and transferred into tobacco plants. The OfTCP13 protein was found to be localized in the nucleus and lacked transcriptional activation activity. Compared with wild-type (WT) plants, overexpression of OfTCP13 significantly influenced the leaf morphology of tobacco plants, resulting in significantly greater leaf thickness and blade length-to-width ratio than in WT plants. Furthermore, cross-sections of the transgenic tobacco plants exhibited an increased number of mesophyll cells relative to WT plants, suggesting that OfTCP13 may regulate leaf morphology by increasing mesophyll cell number. Together, our results provide valuable insight into improving the diversity of leaf morphology in O. fragrans. Citation: Zheng Z, Xu Q, Tang J, Chen P, Hu Z, et al. 2023. Genome-wide analysis of TCP gene family in Osmanthus fragrans reveals a class I gene OfTCP13 modulate leaf morphology. Ornamental Plant Research 3:15 https://doi.org/10.48130/OPR-2023-0015 Introduction The plant leaf is a critical organ, as it plays a pivotal role in both photosynthesis and respiration. Leaf morphology is a vital index in plant morphogenesis, where the form and size of leaves determine plant yield and shape. However, leaf morphology is influenced by various factors such as variety, age, and environment [1,2]. By regulating leaf morphology, it is possible to optimize the plant's light energy utilization, improve its quality and yield, and alter the shape and ornamental value of garden plants. The Teosinte branched1/Cycloidea/Proliferating cell factor (TCP) transcription factor family is a distinct group of proteins found in plants, characterized by a conserved structural domain consisting of an atypical basic helix-loop-helix (bHLH) motif spanning approximately 59 amino acids [3]. The TCP domain was initially identified in four genes from three plant species: Teosinte branched1 (TB1) in maize (Zea mays), which plays a crucial role in apical dominance, inflorescence development, and other biological processes [4]; Cycloidea (CYC) in snapdragons (Antirrhinum majus), which controls flower morphology [5]; and PROLIFERATING CELL FACTORS 1/2 (PCF1/2) in rice (Oryza sativa), involved in cell proliferation and the growth of meristematic tissues and lateral organs [6]. The TCP gene family is generally classified into two subfamilies: class I, also known as the PCF subfamily, and class II, which further branches into the CIN and CYC/TB1 subfamilies [7,8]. TCP genes are widely distributed across various plant species and play crucial roles in multiple growth and development processes. For instance, they are involved in fruit development [9], shoot branching [10], flower morphological regulation [11], leaf morphological regulation [12], hormone signaling [13] and abiotic stress response [14].
A large number of TCP genes have been identified in many plants by whole genome sequencing. In total, 23, 22, 46, and 36 members of the TCP transcription factor family have been identified in Arabidopsis thaliana, rice, potato (Solanum tuberosum L.), and maize, respectively [15−18]. These TCP transcription factors play crucial roles in regulating leaf development and morphology in various species. In Arabidopsis, class I TCP genes (AtTCP7, AtTCP8, AtTCP22, and AtTCP23) exhibit mutual interactions and regulate the expression of KNOTTED1-LIKE homeobox (KNOX) genes, thus exerting critical influence over leaf development and cell proliferation [19]. Notably, the generation of a pentuple mutant of these genes leads to significant alterations in leaf development [19]. Moreover, AtTCP14 and AtTCP15 interact with SPINDLY (SPY) to modulate cytokinin (CK) biosynthesis or degradation in leaves, thereby influencing cell proliferation during leaf development [20]. Mutations in the AtTCP14 and AtTCP15 genes give rise to phenotypes characterized by wider leaf bases, shorter petioles, and upwardly curved leaf edges [21]. Additionally, class II TCP genes are also involved in regulating cell proliferation and differentiation during leaf development. AtTCP1 plays a significant role in leaf shape regulation by participating in strigolactone signaling [13]. AtTCP4, on the other hand, inhibits the growth of leaf surface trichomes by activating GLABROUS INFLORESCENCE STEMS (GIS) [22], and AtTCP5 contributes to the regulation of leaf margin morphology by activating BEL-like transcription factors [23]. Similar observations of TCP gene involvement in leaf morphology regulation have been reported in other ornamental plants. For instance, mutations in the CIN gene in snapdragons lead to excessive proliferation of leaf marginal cells, resulting in leaf margin undulation [24]. Similarly, mutations in the LANCEOLATE (LA) gene, the tomato (Solanum lycopersicum) ortholog of Arabidopsis TCP4, result in the transformation of compound leaves into small, simple leaves [25]. Osmanthus fragrans, a plant species of significant economic and ornamental value, has been cultivated in China for centuries and holds a distinguished reputation as one of the ten most famous traditional flowers. Besides its sweet fragrance, O. fragrans exhibits an aesthetically pleasing tree shape, making it an attractive choice for landscaping purposes. Previous studies have indicated the critical role of TCP transcription factors in the regulation of plant leaf development and morphology in most species [19]. However, there is limited research regarding the role of TCP transcription factors in the ornamental plant O. fragrans, and their involvement in the regulation of leaf morphology in this plant remains unclear. In this study, we conducted a comprehensive analysis of the TCP family at the whole genome level in O. fragrans, and examined the expression patterns of the class I TCP genes in different tissues of O. fragrans. One class I TCP gene, named OfTCP13, was obtained, showing higher expression in leaf tissue. To further explore its function, we validated the gene's role in tobacco lines. The outcomes of our investigation offer valuable insights into the understanding of leaf morphogenesis in O. fragrans. Plant material and growth conditions The 6-7-year-old O. fragrans cultivar 'Yanhonggui' was selected and cultivated in the O.
fragrans Resource Garden of Zhejiang University of Agriculture and Forestry, China (latitude: 30°15′14″ N, longitude: 119°43′39″ E). Tissue samples, including roots, stems, mature leaves and leaf buds, were collected from different parts of O. fragrans during the vegetative stage, while the flower tissues were harvested at the full flowering stage. Approximately 0.5 g of each tissue sample was harvested and immediately frozen in liquid nitrogen. The frozen samples were subsequently stored at −80 °C until further analysis. Three independent biological replicates were included for each tissue sample. Genome-wide identification of TCP gene family To identify and retrieve all the OfTCP protein sequences in the O. fragrans genome, the TCP conserved domain (PF03634, http://pfam.xfam.org/) was utilized and the online tool HMMER v3.3.2 (http://hmmer.org/) was employed for identification with a default e-value. Subsequently, the presence of the TCP conserved domain in all proteins was re-evaluated using SMART (http://smart.embl-heidelberg.de/) and the Batch Web CD-Search of NCBI (www.ncbi.nlm.nih.gov/cdd/?). For the gene names, 12 OfTCP genes, including OfTCP1 to OfTCP5 and OfTCP7 to OfTCP13, were identified and named by Zhou et al. [26], and these names have been adopted and maintained in our study. The remaining OfTCP genes have been assigned names based on their corresponding gene IDs in sequential order. All the OfTCP amino acid sequences are provided in Supplemental Table S1. In addition, the molecular weight (MW) and the isoelectric point (pI) of all OfTCP proteins were obtained using the online program ExPASy (www.expasy.org/tools/). Chromosomal localization was analyzed with TBtools software [27]. The construction of the phylogenetic tree was performed using MEGA 11.0 software, employing the neighbor-joining method with a bootstrap value of 1,000, and iTOL.v6 software (https://itol.embl.de/) was utilized to visualize the phylogenetic tree. TCP protein sequences for Arabidopsis were downloaded from TAIR (www.arabidopsis.org/). Sequence characterization analysis of class I TCP genes Multiple sequence alignment of the class I OfTCP genes was performed using DNAMAN software (www.lynnon.com/dnaman.html). To identify motifs of the class I TCP proteins, the MEME online program (https://meme-suite.org/meme/tools/meme) was utilized, with a maximum of ten output motifs selected. The DNA sequence structure of the class I OfTCP genes was determined with Gene Structure Display Server 2.0 (http://gsds.cbi.pku.edu.cn/), which provided a visual representation of the exon-intron organization. Cis-element analysis was conducted on the 2,000 bp upstream sequences of the class I OfTCP genes using the PlantCARE database (https://bioinformatics.psb.ugent.be/). The full-length amino acid sequences of the class I TCPs were aligned using MEGA 11.0 software with default parameters, and neighbor-joining phylogenetic trees were constructed with 1,000 bootstrap replicates. RNA extraction and quantitative real-time PCR A total of 0.5 g of sample was used for RNA extraction from different tissues of O. fragrans using the RNAprep Pure Plant Plus Kit (Tiangen, China) according to the manufacturer's instructions. The extracted RNA samples were treated with RNase-free DNase I (TaKaRa), and their concentration was determined using a Nanodrop 2000 spectrophotometer (Thermo Fisher, USA). The OD 260/280 of the RNAs ranged from 1.8 to 2.2, and the OD 260/230 was > 1.8.
First-strand cDNA was synthesized using HiScript III All-in-one RT SuperMix Perfect for qPCR (Vazyme, China). The expression levels of the class I TCP genes in different tissues of O. fragrans were analyzed by quantitative real-time PCR (qRT-PCR). The reaction system consisted of 10 µL SYBR Premix Ex Taq (Vazyme, China), 0.4 µL each of forward and reverse primers (10 mM), 2 µL cDNA and 6.2 µL double-distilled H 2 O (ddH 2 O); the reaction program was 95 °C for 30 s, then 40 cycles of 95 °C for 5 s and 60 °C for 30 s, followed by 95 °C for 5 s and 60 °C for 1 min. OfACT and NtACTIN were selected as the reference genes for O. fragrans and Nicotiana tabacum, respectively [28,29]. The relative gene expression was calculated using the 2 -ΔΔCT method [30]. All primer sequences are listed in Supplemental Table S2. Vector construction The full-length cDNA of OfTCP13 was obtained by PCR. The PCR product was purified with a FastPure Gel DNA Extraction Mini Kit (Vazyme, China), ligated into the pMD-18T plasmid (TaKaRa, China) and then sequenced. After PCR amplification using primers with NheI and XhoI restriction sites, the complete coding sequence of OfTCP13 without the stop codon was ligated into the pORE-R4-35AA vector [31] to create a recombinant plasmid (35S::OfTCP13-GFP) for subcellular localization analysis and tobacco transformation. After PCR amplification using primers with BamHI and EcoRI restriction sites, the complete coding sequence of OfTCP13 was ligated into the pGBKT7 vector to create a recombinant plasmid (pGBKT7-OfTCP13) for transcriptional activation analysis in yeast cells. Each ligation reaction contained 2 µL of linearized vector, 4 µL of gene fragment and 4 µL of enzyme, and the procedure used was 50 °C for 10 min followed by final storage at 4 °C. All primer sequences are listed in Supplemental Table S2. Subcellular localization The subcellular localization analysis was performed according to previously described procedures [32], with slight modifications. The plasmid 35S::OfTCP13-GFP was transformed into Agrobacterium rhizogenes strain GV3101. The transformed strain was mixed well with the nuclear marker 35S::D53-RFP (OD600 = 0.6 per strain, 1:1 ratio). The mixture was then protected from light and incubated for 3 h before being injected into the leaves of N. benthamiana. As a negative control, a null plasmid (35S::GFP) was also injected. After one day of dark incubation and two days of light incubation, GFP signals were detected using confocal microscopy (Olympus Corporation, Japan). Transcriptional activation For the yeast cell transcriptional activation analysis, the recombinant plasmid pGBKT7-OfTCP13 was transformed into yeast strain AH109 using the lithium acetate method and plated on selective medium plates without tryptophan (SD/-Trp). pGBKT7-OfWRKY57 was used as a positive control, and the empty vector pGBKT7 was used as a negative control. Three days later, the colonies were verified by PCR, and the positive clones were inoculated onto selective medium plates of SD/-Trp, SD/-Trp-Leu-Ade and SD/-Trp-Leu-Ade+X-α-gal. After 3 d, the yeast cells were photographed and their growth was assessed. Leaf phenotype investigation Three-month-old tobacco lines were selected for phenotypic observation. The leaf length and width were measured with a vernier caliper, with nine biological replicates. In order to compare the leaf thickness between transgenic tobacco and WT, the structure of leaves in cross-sections was observed using paraffin sections [33].
The method was as follows: samples were first fixed in FAA fixative (ethanol : acetic acid = 3:1), dehydrated in an ethanol series, and embedded in paraffin. Samples were cut into 10 µm sections, dewaxed, resin-sealed, and observed using an Axio Imager A2 upright fluorescence microscope (Carl Zeiss, Germany). TCP transcription factor identification in O. fragrans A total of 39 TCP genes were identified according to the conserved bHLH domain in the O. fragrans genome, and the characteristics of the OfTCP amino acid sequences are presented in Supplemental Table S3. The length of the 39 OfTCP proteins varied from 203 amino acids (OfTCP19) to 587 amino acids (OfTCP35), the molecular weight (MW) ranged from 21.94 kDa (OfTCP19) to 65.57 kDa (OfTCP35), and the theoretical isoelectric point (pI) varied from 5.88 (OfTCP7) to 10.16 (OfTCP19). According to the phylogenetic tree analysis with Arabidopsis and the TCP domain characteristics, the OfTCP TFs were clustered into three main groups, with 16, 13, and 10 OfTCPs contained in the PCF, CIN, and CYC/TB1 groups, respectively (Fig. 1). In addition, chromosomal location analysis showed that the OfTCP genes were distributed across 18 chromosomes, all except chr8, chr11, chr14, chr17, and chr22, in O. fragrans (Supplemental Fig. S1). Class I OfTCPs sequence analysis To characterize the sequences of the class I OfTCP proteins, we performed multiple sequence alignment and identified the conserved TCP domain, which consists of four regions: the basic region, helix I, loop, and helix II (Fig. 2a). Analysis of motif composition revealed distinct differences among the class I OfTCP proteins. While motif 1 and motif 2 were present in all class I OfTCP proteins, the distribution of the other motifs varied (Fig. 2b & Supplemental Fig. S2). Examination of the gene structures of the class I OfTCP genes showed that 14 OfTCPs had no introns, OfTCP5 contained one intron, and OfTCP29 harbored two introns (Fig. 2c). These results suggest that variations in the motif composition among class I OfTCP proteins may be associated with different functional roles. Furthermore, analysis of the promoter sequences of the class I OfTCP genes identified four categories of cis-acting elements within the 2,000 bp upstream regions, involved in hormone response, stress response, light response, and growth and developmental regulation (Supplemental Fig. S3). Among these elements, light-responsive elements accounted for the highest proportion (45%), followed by hormone-responsive elements (34%), stress-responsive elements (18%), and growth and developmental regulatory elements (3%). Therefore, the results implied that the expression of class I OfTCP genes is regulated by multiple factors and is involved in various biological processes in O. fragrans. Expression analysis of class I OfTCPs in different tissues Previous studies have demonstrated that class II TCP genes play an important role in leaf development [34]. To investigate which class I OfTCP genes are prominently expressed in leaf tissue and involved in leaf development, we examined the expression levels of the class I OfTCP genes in various tissues (root, stem, leaf, flower, and shoot) of O. fragrans using qRT-PCR. Our results indicated that these class I OfTCP genes are expressed in all tested tissues and could be categorized into four groups based on their expression patterns (Fig. 3).
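For completeness, a rough, hedged Python alternative to the MEGA-based neighbor-joining classification described above is sketched below using Biopython's distance-based tree constructor; it omits bootstrapping, uses a simple identity distance, and the file names are hypothetical placeholders.

```python
# Rough sketch of a neighbor-joining tree from a pre-aligned protein FASTA file,
# approximating (not reproducing) the MEGA 11 workflow described above.
# Bootstrapping is omitted; the file names are hypothetical placeholders.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("OfTCP_AtTCP_aligned.fasta", "fasta")   # pre-aligned sequences
distance_matrix = DistanceCalculator("identity").get_distance(alignment)

nj_tree = DistanceTreeConstructor().nj(distance_matrix)          # neighbor-joining tree

Phylo.draw_ascii(nj_tree)                                        # quick text rendering
Phylo.write(nj_tree, "OfTCP_nj_tree.nwk", "newick")              # e.g. for visualization in iTOL
```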
Notably, we found higher expression levels of OfTCP5, OfTCP13, OfTCP22, and OfTCP37 in leaf tissue compared with other tissues, indicating their potential critical roles in regulating leaf development in O. fragrans. Among these genes, OfTCP13 displayed a more than 8.0-fold increase in expression relative to the root tissue, drawing our attention for further investigation in subsequent experiments. Analysis of OfTCP13 subcellular localization and transcription activation activity To investigate the subcellular localization of OfTCP13, the 35S::OfTCP13-GFP and 35S::D53-RFP plasmids were co-transformed into N. benthamiana leaves (Fig. 4a). The 35S::D53-RFP plasmid served as a positive control for nuclear localization, expressing a red fluorescent protein specifically targeted to the nucleus. Fluorescence imaging analysis revealed that the OfTCP13-GFP fusion protein was exclusively localized in the nucleus during transient expression. In contrast, the control GFP was evenly distributed throughout the cells. These results suggest that OfTCP13 is a nuclear-localized protein. For the transcription activation activity analysis, the full-length coding sequence (CDS) of OfTCP13 was amplified and fused into the pGBKT7 plasmid. Subsequently, we transformed this construct into the yeast strain AH109. The results demonstrated that the positive control-transformed yeast cells turned blue in the presence of X-α-gal, indicating successful transcriptional activation. However, the pGBKT7-OfTCP13-transformed yeast cells, as well as the negative control-transformed yeast cells, did not exhibit blue coloration in the presence of X-α-gal (Fig. 4b). These observations suggest that OfTCP13 lacks transcriptional activation activity. OfTCP13 is involved in the regulation of leaf shape A 35S::OfTCP13-GFP plasmid was constructed and introduced into tobacco plants via leaf disc transformation (Fig. 5a). The resulting transgenic lines were initially screened by PCR to confirm the presence of the transgene, and three-month-old plants of the transgenic line E1 were employed for subsequent phenotypic investigation (Fig. 5b, c). Distinct differences in leaf phenotypic traits were observed between the E1 transgenic line and the wild type (WT). Notably, the E1 lines exhibited leaves with a narrower shape and wavy, serrated leaf margins, while the WT leaves displayed spreading leaf margins (Fig. 5d). Moreover, the overexpression of OfTCP13 significantly influenced leaf thickness and blade length-to-width ratio. The leaf thickness measured 0.76 ± 0.08 mm in the transgenic lines and 0.53 ± 0.06 mm in the WT, whereas the blade length-to-width ratios were 0.35 ± 0.24 and 0.28 ± 0.30 in the transgenic lines and the WT, respectively (Fig. 5e, f). When the cross-sections of the leaves were further investigated using paraffin sections, we found an increased number of mesophyll cells in the leaf tissue of the overexpression plants compared with the WT (Fig. 5g). These results suggest that overexpression of OfTCP13 can influence plant leaf morphology. Discussion TCP transcription factors are ancient proteins that have been widely identified in various plant species, such as Arabidopsis [15], rice [16], and maize [18]. Most studies have demonstrated that TCP transcription factors play critical roles in plant growth and development.
In this study, a total of 39 TCP genes were identified through a comprehensive analysis of the O. fragrans genome. These genes were classified into three main groups based on their phylogenetic relationships (Fig. 1) and were found to be distributed across 18 chromosomes in O. fragrans (Supplemental Fig. S1). Similar to other angiosperms, such as Populus, Vitis vinifera, and rice, we propose that the TCP genes in O. fragrans evolved into a larger family through gene duplication and diversification [7]. Among the class I OfTCP members, a highly conserved bHLH domain was identified, which is crucial for DNA binding in the basic region of the TCP N-terminus. Additionally, the C-terminal HLH region is involved in the formation of homodimers or heterodimers [34,35]. Notably, the class I OfTCP proteins exhibit distinct differences in their motif compositions, while only motif 1 and motif 2 are present in all class I TCP transcription factors. This implies that the presence of different motifs in class I OfTCP proteins underlies the differentiation of their functions. The phylogenetic tree analysis further revealed that class I OfTCP proteins on the same branch may be functionally similar, while those from different branches likely have complementary or different functions (Fig. 2). Leaves serve as crucial ornamental organs for garden plants, exhibiting considerable variations in morphology across different plant species and cultivars. Furthermore, leaf morphology changes as leaves age and in response to the specific environmental conditions in which the plant species thrives [36,37]. In Arabidopsis, the TCP family of proteins has been extensively documented for its fundamental involvement in the regulation of leaf shape. For instance, the AtTCP23 gene influences leaf borders, as evidenced by the contrasting phenotypes observed in overexpression and knockout lines [38]. However, there are limited reports regarding the role of TCP transcription factors in regulating leaf morphology in O. fragrans. Therefore, in order to elucidate the potential involvement of TCP transcription factors in shaping leaves in O. fragrans, we conducted a comprehensive analysis of the expression of class I TCP family members across various tissues (Fig. 3). The outcomes revealed that all 16 class I OfTCP genes were expressed in all tested tissues, with notably higher expression levels of OfTCP5, OfTCP13, OfTCP22, and OfTCP37 observed in leaf tissue relative to other tissues. These results suggest that these four TCP genes may act as important regulators of leaf morphology in O. fragrans. As a next step, we isolated the sequence of the intriguing OfTCP13 gene from the O. fragrans cultivar 'Yanhonggui' for subsequent investigations. By means of subcellular localization analysis, we determined that OfTCP13 is exclusively localized in the nucleus (Fig. 4a), consistent with previous observations in other plant species [39]. The transcriptional activation assay revealed that OfTCP13 does not possess the capability to activate transcription on its own (Fig. 4b), indicating that it may require the collaboration of other transcription factors to exert its regulatory function. TCP transcription factors have been implicated in promoting leaf growth and development by regulating processes such as cell proliferation and enlargement [40].
For instance, Arabidopsis mutants of TCP2, TCP3, TCP4, TCP10, and TCP24 show a phenotype in which the leaves exhibit an increase in both size and curvature [41]. Additionally, the transcription factors AtTCP2 and AtTCP3 have been found to synergistically activate the expression of NGATHA (NGA), which controls leaf margin formation [42]. In the case of chrysanthemum, CmTCP20 not only regulates petal cell growth but also induces elongation of Arabidopsis rosette leaves [43]. In our study of O. fragrans, we constructed overexpression lines of OfTCP13 in tobacco and observed that overexpression of OfTCP13 resulted in leaves with a narrower shape and wavy, serrated margins compared with WT plants (Fig. 5c, d). Notably, the overexpression lines exhibited a significant increase in leaf thickness and blade length-to-width ratio in comparison with the WT plants (Fig. 5e, f). Moreover, the examination of paraffin sections showed a higher number of mesophyll cells in the transgenic tobacco plants (Fig. 5g). These results suggest that OfTCP13 potentially regulates leaf thickness and blade length-to-width ratio by promoting mesophyll cell proliferation. It is noteworthy that the plant hormone auxin plays a crucial role in determining the final morphology of leaves, as it is involved in both leaf formation and the development of marginal outgrowths that contribute to the overall shape of the leaf [44]. Hence, it is plausible to speculate that there might be a correlation between OfTCP13 and auxin in promoting the proliferation of mesophyll cells in O. fragrans [45−48]. Conclusions In summary, a total of 39 OfTCP genes were identified from the O. fragrans genome according to the conserved bHLH domain and phylogenetic relationships. Among these, the class I TCP gene OfTCP13 was found to be crucial in the regulation of leaf morphology by promoting the proliferation of mesophyll cells in O. fragrans. These findings contribute to our understanding of how TCP transcription factors govern the diversity of leaf morphology in O. fragrans and provide a solid theoretical foundation for further research in this field.
5,598.6
2023-01-01T00:00:00.000
[ "Biology", "Environmental Science" ]
DBSP_DRP: A Python package for automated spectroscopic data reduction of DBSP data DBSP_DRP is a Python package that provides fully automated data reduction of data taken by the Double Spectrograph (DBSP) at the 200-inch Hale Telescope at Palomar Observatory (Oke & Gunn, 1982). The underlying data reduction functionality to extract 1D spectra, perform flux calibration and correction for atmospheric absorption, and coadd spectra together is provided by PypeIt (Prochaska et al., 2020). The new functionality that DBSP_DRP brings is in orchestrating the complex data reduction process by making smart decisions so that no user input is required after verifying the correctness of the metadata in the raw FITS files in a table-like GUI. Though the primary function of DBSP_DRP is to automatically reduce an entire night of data without user input, it has the flexibility for astronomers to fine-tune the data reduction with GUIs for manually identifying the faintest objects, as well as exposing the full set of PypeIt parameters to be tweaked for users with particular science needs. DBSP_DRP also handles some of the occasional quirks specific to DBSP, such as swapping FITS header cards, adding (an) extra null byte/s to FITS files making them not conform to the FITS specification, and not writing the coordinates of the observation to file. Additionally, DBSP_DRP contains a quicklook script for making real-time decisions during an observing run, and can open a GUI displaying a minimally reduced exposure in under 15 seconds. Docker containers are available for ease of deploying DBSP_DRP in its quicklook configuration (without some large atmospheric model files) or in its full configuration. Statement of Need Palomar Observatory, located near San Diego, CA, is a multinational observatory with a broad user base.
Users come from large and small institutions, and their observing experience ranges from novice to expert. One responsibility for serving such a diverse user base is to provide software data reduction pipelines for the most frequently used instruments, such as the Palomar Double Spectrograph (DBSP). Although DBSP was commissioned in 1982, it remains the workhorse instrument of the 200" Hale Telescope. It is used on 42% of the nights in a year, comprising nearly all of the valuable "dark" (moonless) time. In previous years, standard astronomical practice left the data reduction up to the user. However, attitudes in instrument building have shifted since DBSP was built. The pipeline is now considered an indispensable component of the astronomical instrument. In fact, the difference between a good pipeline and a great pipeline means the difference between counting some of the photons vs. counting all of the photons. Spectroscopy is a severe bottleneck in time-domain astronomy; currently less than 10% of discoveries are spectroscopically classified. Without a pipeline, data reduction is a difficult process, and the standard method without a pipeline is to use IRAF, a 35-year-old program whose development and maintenance were discontinued in 2013 and whose use is discouraged by many in the field (e.g., Ogaz & Tollerud, 2018). Needless to say, data reduction sans pipeline is extremely time-consuming. There is a clear need for a modern and stable automated data reduction pipeline for DBSP. During observing runs, one would like to be able to quickly inspect data as it is taken, in order to ensure that it is of sufficient quality for the desired science. For objects whose brightness may have changed between a previous observation and the observing run, the observer may be uncertain how long an exposure is needed to produce quality data. For very faint objects or objects in crowded fields, the observer may not even be sure that the telescope is pointed at the right object! A quicklook functionality that can do a rudimentary reduction to correct for instrumental signatures and subtract light from the sky, revealing the spectra of the objects observed, can answer questions of exposure time and whether the object observed is the right one. DBSP_DRP is currently being used by the ZTF Bright Transient Survey (Perley et al., 2020), the ZTF Census of the Local Universe, and a program investigating ZTF Superluminous Supernovae (Lunnan et al., 2020; Chen et al., in preparation). Ravi et al. (2021) is the first (known) publication that used DBSP_DRP for data reduction. The development of DBSP_DRP also lays the groundwork towards a fully automated pipeline for the Next Generation Palomar Spectrograph that is planned to be deployed on the Palomar 200-inch Hale Telescope in 2022.
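To make the kind of DBSP-specific FITS quirk mentioned in the summary more concrete (swapped header cards), here is a purely illustrative Python sketch using astropy; it is not DBSP_DRP's actual implementation, and the file and keyword names are hypothetical placeholders.

```python
# Purely illustrative sketch of a FITS-header fix-up of the kind described above
# (two header cards written in swapped order). This is NOT DBSP_DRP's actual code;
# the file name and keyword names are hypothetical placeholders.
from astropy.io import fits

def swap_header_cards(path, key_a, key_b):
    """Swap the values of two cards in the primary HDU header, in place."""
    with fits.open(path, mode="update") as hdul:
        header = hdul[0].header
        header[key_a], header[key_b] = header[key_b], header[key_a]
        hdul.flush()  # write the corrected header back to disk

# Hypothetical usage on a raw frame whose two cards were swapped at write time:
# swap_header_cards("red0042.fits", "ANGLE", "GRATING")
```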
1,391.6
2021-07-26T00:00:00.000
[ "Computer Science", "Physics" ]
THE INFLUENCE OF COVID-19 ON ECONOMICS IN INDONESIAN HEALTHCARE AND FINANCE ABSTRACT INTRODUCTION The coronavirus was first discovered in Wuhan, China; cases in December reached 27 and the virus spread so easily that 266 cases had been recorded by the end of December. At the beginning of 2020, 381 cases were recorded in China. Since the virus appeared in China, Chinese industries have been forced to close businesses and factories, and this has also had an impact on industry in Indonesia, because Indonesia has business relations with China and many goods are exported to and imported directly from China. About 29% of the goods exported by China as raw and auxiliary materials come from Indonesia, so a decline in commodities and mining goods will have an impact on the income of workers in this sector. Indonesia's economy is still dependent on commodities and mining goods; if purchasing power decreases, there is no incentive for entrepreneurs to increase their investment. The restrictions imposed by China have disrupted the availability of imported goods from China, so that industries or sectors whose raw materials or capital goods originate from China have had their production processes disrupted. This also affects consumer goods: if local supply is not available, prices will increase. To date the Indonesian economy has been disturbed, as reflected in the many declines in stock prices in various sectors. Santi of Kompas.com (2021) reported that at the close of the first trading session on the Indonesia Stock Exchange (IDX), the Jakarta Composite Index had decreased by 2.94%, or 151.54 points, to the 5,002.55 level. The WHO announcement that raised the status of the coronavirus to a pandemic caused market participants to transfer assets to safer investment instruments. Riyandi of Ayobandung.com (2022), quoting data from Mirae Asset Sekuritas of Monday (23/3/2020), also reported that shares in the telecommunications sector, such as PT Telkom Indonesia Tbk (TLKM), decreased by 24 percent. Not only telecommunications stocks were affected: PT Indosat Tbk (ISAT) fell to a level of 52%, and the health sector, such as hospitals and pharmaceuticals, also experienced share-price declines due to the virus epidemic. Seeing these various stock declines, the Indonesian economy in 2020 and the coming years will depend heavily on the handling of the coronavirus pandemic. Investors are taking various actions to deal with the pandemic; many investors have liquidated their investment instruments because of it (Endraria, 2022; Ichsan, 2021). Portfolio rebalancing must be carried out in order to minimize the losses that may occur in the midst of the pandemic (Haitao & Ali, 2022; Jogiyanto, 2013; Tandelilin, 2010). The Indonesian government is trying to prevent transmission by, among other measures, implementing Large-Scale Social Restrictions (PSBB). The regulations regarding PSBB were issued by the Ministry of Health (Kemenkes) to accelerate the handling of COVID-19 so that they could be implemented immediately in various regions (Alnizar & Manshur, 2022; Aviariska, 2020). The PSBB rules are set out in Minister of Health Regulation Number 9 of 2020.
Secretary General of the Ministry of Health Oscar Primadi said in a written statement that the PSBB covers restrictions on the activities of certain residents in an area suspected of being infected with COVID-19. These restrictions include closing schools and workplaces, restrictions on religious activities, restrictions on activities in public places or facilities, restrictions on socio-cultural activities, restrictions on modes of transportation, and restrictions on other activities specifically related to aspects of defense and security (Mawar et al., 2021; Napitu et al., 2020). PSBB may be applied where cases of and deaths from COVID-19 increase significantly and rapidly and have epidemiological links with similar events in other regions or countries. PSBB is carried out during the longest incubation period and can be extended if there is still evidence of spread (Herdiana, 2020). The Minister of Health explained that schools and workplaces are closed except for strategic offices or agencies that provide services related to defense and security, public order, food needs, fuel oil and gas, health services, the economy, finance, communications, industry, export and import, distribution, logistics, and other basic needs (Mahadewi, 2021; Rohman, 2021; Safitri & Dewa, 2022). March 13, 2020 was a tense and historic date for capital market participants. The decision to temporarily freeze trading (a trading halt) was taken by the Indonesia Stock Exchange because the Jakarta Composite Index (IHSG) was sharply corrected by 5%. Since President Jokowi announced the first case of COVID-19 in Indonesia on March 2, 2020, the JCI had become increasingly volatile, after previously being influenced mainly by the international market. A decline in the JCI of 6.5% occurred on March 9, 2020; after that, the Indonesia Stock Exchange repeatedly implemented trading halts. The stock market is indeed sensitive to news about COVID-19, which has hit most countries in the world. Concerns about banking liquidity and an increase in non-performing loans were the main issues during the COVID period (Ichsan et al., 2021; Kurniasari et al., 2023). With social distancing, employee layoffs, and a weakening economy, most debtors will find it difficult to repay their loans. The fear of economic uncertainty also makes people hesitate to deposit their funds in banks. This affects the amount of third-party funds that can be managed by banks. These several factors affecting the health of the banking sector are reasons for the capital market to reconsider its decisions. These conditions forced the Government to adopt financial policies. To maintain financial stability, sustain economic growth, and optimize the banking intermediary function, the government issued several financial policies in anticipation of the impact of COVID-19. The Indonesian Financial Services Authority issued regulations regarding a national economic stimulus as a measure to anticipate the impact of the spread of COVID-19 by issuing POJK No. 11/POJK.02/2020. In summary, this POJK regulates banking policies to support economic growth by providing leeway for debtors in fulfilling their obligations. This POJK applies to national banks, including BUK, BUS, UUS, BPR, and BPRS. Some of the business sectors that received stimulus were tourism, transportation, hospitality, trade, processing, agriculture, and mining. Leeway was given for the settlement of debtor obligations with a ceiling of up to 10 billion. This policy targets MSMEs and the lower middle class who have been affected by COVID-19.
Another policy is the restructuring of credit payment schedules without limiting credit limits. The Stimulus Provisions, as a countercyclical policy against the impact of the spread of COVID-19, were promulgated on March 20, 2020 and were valid until March 31, 2021. As of this November, the pandemic can be said to have subsided, falling to level 1, with transmission in Indonesia reaching below 1,000 people per day. However, in July 2021 there was a very high wave of the pandemic, during which the transmission of COVID-19 reached above 50 thousand people per day. Therefore, this research looks at the differences in the economic impact from the start of the pandemic, through the peak of the pandemic, to when the pandemic started to subside. This research was designed to see how POJK Policy Number 11/POJK.02/2020 influenced the financial markets. Some time before the POJK was published, the money market had already been affected by the spread of COVID-19 and by the movement of international stocks. The question is whether the government's stimulus policy had a positive effect on the national banking stock market or not. The effect could also work the other way around: with credit restructuring, banking liquidity could be disrupted because some credit payments are delayed. The end result of this situation is a decline in bank profitability, which can be a consideration for the capital market when investing in the banking sector. The results of this study are expected to provide additional references regarding the impact of the COVID-19 epidemic in Indonesia. The findings concern the impact on the healthcare sector and the financial sector. For the health sector, they can be used as a basis for government policy in strengthening health sector companies. Meanwhile, in the banking sector, the impact of credit payment jams has been felt, and this has had an impact on the real sector. METHODS This research is descriptive and uses the event study method applied to the announcement of the COVID-19 outbreak as a national emergency disaster, for health sector companies and the banking sector listed on the Indonesia Stock Exchange. The research was conducted at 4 points of occurrence: starting from 3 March 2020, when the pandemic was first announced nationally; when the banking policy was announced on 11 March 2021; when the second peak of the pandemic occurred in July 2021; and when levels 1-3 were announced in October 2021. The population in this study is the indexes listed on the Indonesia Stock Exchange, specifically companies in the health care and financial sectors. The sample was selected purposively, that is, with a sampling technique in which the researcher establishes special characteristics that are in accordance with the research objectives so that the sample can be expected to answer the research problems. Based on these provisions, there are 15 companies in the health sector and 53 in the financial sector. This research uses secondary data collection techniques, in which the data are obtained indirectly from the object or research subject, namely by searching manually (via books, indexes, bibliographies, references, and relevant literature) and searching online (via internet databases).
RESULTS AND DISCUSSION This research retrieves data from the Indonesia Stock Exchange in 2020 related to the official announcement by the Indonesian government, which was made on March 11, 2020. The data obtained are stock price data for the financial sector and the health sector: there are 105 listed stocks in the financial sector and 24 in the health (healthcare) sector. The list of stocks for each can be seen in Appendix-1. The share prices, seen from returns during January 2020, are as follows. From the table it can be seen that there was a sharp decline during January, from 30.512% on the 1st trading day to -1.311% on the 20th trading day (January 31, 2020). The IHSG return was almost the same: on January 1, 2020 it was 0.635%, while on January 31, 2020 it was -0.907%. The decline in the JCI was slower than the average for the health sector in January 2020. In February 2020, the movement of stock returns was as follows: from the table it can be seen that a sharp decline also began during February, from 6.196% on the 1st trading day to -78.779% on the 20th trading day (February 28, 2020). The JCI return was again similar: in the 1st trade on February 1, 2020 it was -1.941%, while on February 28, 2020 it was -2.693%. The decline in the JCI was slower than the average for the health sector in February 2020. On the 2nd trading day, nearing the official COVID announcement, stock returns fell to -78.779%. This indicates early symptoms of the COVID-19 event, which was announced on March 12, 2020. Financial sector stocks in January 2020 can be seen as follows: from the table, the movement from the 1st trading day in January 2020 to the 20th trading day (January 31, 2020) did not decline smoothly; the last trade in January 2020 was even better than the return earlier in the month, -2.799% at the beginning of the month versus -1.010% at the end of the month. Meanwhile, financial sector stocks in February 2020 can be seen as follows: in the first trade of February 2020 the return was -0.988%, while at the end of the month it was -1.156%. This slight decline signals that the financial sector was not affected too much. Analysis To analyze the data, the steps are as follows: 1) compute stock returns in the estimation period of 2 (two) months, 1 January 2022 to 2 March 2022, covering 60 estimated days or 42 trading days; 2) estimate α (alpha) and β (beta) for the stocks to create a regression equation; 3) with this equation, calculate the expected return in the window period, in this study 15 trading days (3 March 2022 to 23 March 2022); 4) calculate the average expected return and realized return; and 5) analyze with the t-test whether there is a difference between expectation and realization. Analysis for the Health Sector The returns during the estimation period are shown in table 4.1 and table 4.2, and the results of the analysis during the estimation period can be seen in the attachment. From the data obtained, a regression model can be estimated by regressing stock returns on IHSG returns. The model obtained is as follows: Y = -0.004 + 0.199 X (4-1). By entering the JCI return into this regression equation, the expected return during the window period can be found.
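To make the market-model steps listed above concrete, the following hedged Python sketch estimates alpha and beta over an estimation window, computes expected and abnormal returns in the event window, and applies a one-sample t-test; the return series are synthetic placeholders, not IDX data, and the output does not reproduce the study's figures.

```python
# Sketch of the market-model event study described above:
# 1) estimate alpha and beta over the estimation window,
# 2) compute expected returns in the event window,
# 3) abnormal return = realized return - expected return,
# 4) one-sample t-test of the abnormal returns against zero.
# All input series are synthetic placeholders, not IDX data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

market_est = rng.normal(0.0, 0.01, 42)                     # IHSG returns, 42 trading days
stock_est = -0.004 + 0.199 * market_est + rng.normal(0.0, 0.005, 42)
beta, alpha = np.polyfit(market_est, stock_est, 1)         # OLS: stock = alpha + beta * market

market_win = rng.normal(0.0, 0.02, 15)                     # event window, 15 trading days
stock_win = rng.normal(-0.01, 0.02, 15)                    # realized returns in the window

expected = alpha + beta * market_win
abnormal = stock_win - expected                            # abnormal returns (AR)

t_stat, p_value = stats.ttest_1samp(abnormal, 0.0)         # H0: mean AR = 0
print(f"alpha = {alpha:.4f}, beta = {beta:.4f}")
print(f"mean AR = {abnormal.mean():.4f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```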
The data for the window period follow. The fifth step of the analysis is to test whether there is a significant difference in the window period; if the difference is significant, it means that there is an abnormal return/average abnormal return. The results of the analysis can be seen in the following table. Analysis for the Financial Sector For financial stocks, the return data are in tables 4.3 and 4.4. The model obtained is as follows: Y = -0.002 - 0.147 X (4-2). From the regression results, the expected return data are generated and then compared with the realized return. The processed data are as follows (source: processed IDX data). CONCLUSION The first conclusion is for health sector stocks: from day -7 to day +7 all returns are significant, meaning that the official announcement of COVID-19 affected healthcare sector stock returns on the Indonesia Stock Exchange after the announcement of the COVID-19 virus outbreak as a national disaster. On the day of the event, namely the announcement of COVID-19 as a national epidemic, the market was affected. For the financial sector, from day -7 to day +7 all returns are significant except at the beginning of the window, meaning that the official announcement of COVID-19 affected financial sector stock returns, except on the first day, on the Indonesia Stock Exchange after the announcement of the COVID-19 virus outbreak as a national disaster. On the day of the incident, namely when the national epidemic was announced, the effect was significant at the 1% level.
3,558.8
2023-06-28T00:00:00.000
[ "Economics", "Medicine" ]
Aggregation-induced emission materials for nonlinear optics Aggregation-induced emission (AIE) is a vital photophysical phenomenon in which luminogens in the concentrated or aggregated state engender dramatically boosted emission in comparison with their dispersed states. Given that this extraordinary emitting capacity resolves exactly the aggregation-caused quenching (ACQ) problem residing in traditional luminophores, the booming AIE luminogens have drawn tremendous interest owing to their advanced performance and colossal potential for application in various areas. Further exploitation of AIE molecules has also driven research interest in these AIE materials toward the nonlinear optical (NLO) regime. The combination of AIE and NLO effects has nurtured some unforeseen properties of AIE materials and extended their application spheres. Therefore, some NLO-active AIE materials have been employed in many crucial applications, for example, optical limiting, lasers, bioimaging, and photodynamic therapy. Meanwhile, the impact of aggregation on the NLO effect also deserves deep consideration, and the modification of aggregates promises an easy, efficient, and prompt avenue to tune the NLO properties of materials. The recent achievements and progress in the NLO properties of AIE materials are summarized in this review. The second-order and third-order NLO behaviors of AIE materials are introduced and their correlative applications are discussed. INTRODUCTION The proposed concept of "aggregation-induced emission (AIE)," in which originally weak or nonemissive luminescent chromogens are induced to emit intensively in the aggregate state, blazes a new trail for the advancement and innovation of luminescent materials. [1] Hereafter, the emerging and advanced theories, including crystallization-induced emission (CIE), [2] clusterization-triggered emission (CTE), [3] and polymerization-induced emission (PIE), [4] with AIE as their basis and core, have also cultivated and accelerated the buildup of diverse and versatile AIE luminogens (AIEgens) and the corresponding luminescent materials. Certainly, in pace with the booming flourishing of luminescent materials, the numerous aggregates of AIE materials, e.g., nanoparticles (NPs), [5] crystals, [6] clusters, [7] polymers, [8] supermolecules, [9] gels [10] and composites, [11] endow them with astonishing flexibility, marvelous tailoring capabilities, and even exceptional robustness. [12] Thereby, AIE materials not only attract extensive attention in the scope of solid-state emission, [13] but are also highly favored in optoelectronic systems, [14] stimuli responses, [15] anticounterfeiting security, [16] biomedicine, [17] and sensors. [18] Heretofore, among the developed AIEgens, a portion of such luminogens have been anticipated or determined to be nonlinear optical (NLO) active with attractive NLO coefficients. In particular, AIEgens featuring D-A or D-π-A analogous structures, intramolecular charge transfer (ICT) characteristics, [19,20] and transition dipole moments [21,22] may give rise to NLO activity.
The ICT states can be depicted by two forms, i.e., the neutral and zwitterionic forms, and high NLO hyperpolarizabilities of these molecules can be acquired if the balance between the two forms is optimized. [23,24] In other words, the strength of the donor and acceptor, the π-conjugated distance, and the symmetry exert a manifest effect on the NLO properties of D-A systems. [25,26] Nevertheless, the emission of these D-A compounds is often weak in the aggregate state, greatly limiting their application, and the AIE effect can resolve this predicament. [27] In addition, the D-A configurations also afford diversified electrical and photonic activities to the AIEgens. [28] Additionally, substantial explorations and investigations of the NLO properties of AIEgens have authenticated the effect of aggregation-induced enhanced NLO. The NLO diversification derived from the diverse inherent aggregates in materials opens up innovative approaches to tailoring the NLO properties of materials conveniently and efficiently. Research on the NLO properties of AIEgens also contributes to the extension of their applications and supplies opportunities to dig out, design, and develop comprehensive and versatile AIE materials. In this review, we retrospect the developments and advancements of these significant AIE materials. The chief NLO effects of AIE materials and the associated NLO applications are highlighted and discussed. Ultimately, the state of the art and prospective development trends are clarified. Pure organic AIE materials In 2001, the Tang group discovered the first AIE molecule, 1-methyl-1,2,3,4,5-pentaphenylsilole (MPPS), with a silole core, and explored its turn from a weak luminogen in dilute solution into a strong emitter in aggregates. [29] Afterward, a mass of organic molecules, polymers, and their corresponding derivatives with the AIE trait surged out, including hexaphenylsilole (HPS), tetraphenylethene (TPE), and Schiff bases. [30] These compounds are often built from aromatic moieties for efficient emission and are associated with twisted molecular conformations that suppress intermolecular π-π stacking interactions. [1,31] In the aggregated state, the AIE processes are closely linked to theoretical channels such as restriction of intramolecular rotation (RIR), twisted intramolecular charge transfer (TICT), [32,33] J-aggregation, [34] and excited-state intramolecular proton transfer (ESIPT). [35] Actually, RIR is the primary factor that arouses the AIE effect. [36] HPS, a well-known AIEgen, can be regarded as a derivative of MPPS. [37] Its six peripheral benzene rings can revolve around the central silole when molecularly dispersed in good solvents, but are blocked in aggregation states such as crystals, films, or nanostructures, impeding nonradiative energy loss and brightening the emission. Numerous substituents can serve to modify the silole structures via facile chemical reactions and offer them plentiful AIE activities and other unique photophysical properties. [38] Furthermore, the AIE nature affords the silole derivatives with σ*-π* conjugation bearing low-lying lowest unoccupied molecular orbital (LUMO) levels, which are appropriate for light-emitting diodes (LEDs). Nie et al. conceived and prepared simplified and high-performance organic light-emitting diodes (OLEDs) in light of the AIE and photonic characters of siloles.
[39] The four silole derivatives, (PBI)2DMTPS, (PBI)2MPPS, (PPI)2DMTPS, and (PPI)2MPPS, composed of a 2,3,4,5-tetraphenylsilole core and 1-phenyl-1H-benzo[d]imidazole (PBI) or 1-phenyl-1H-phenanthro[9,10-d]-imidazole (PPI) substituent groups, were synthesized (Figure 1(A)). In contrast with the water-dominant THF/H2O mixtures, a high photoluminescence quantum yield ΦPL of 49.5%-62.1% in the solid state was revealed, implying splendid AIE attributes. By virtue of the synergistic effect of the silole and PBI units, the high electron mobilities of (PBI)2DMTPS and (PBI)2MPPS conferred on them superior electroluminescence (EL) performance in nondoped OLEDs (Figure 1(C)). Chen et al. synthesized another three silole derivatives, (MesB)2DMTPS, (MesB)2MPPS, and (MesB)2HPS (Figure 1(D)), all carrying dimesitylboryl groups, to manufacture efficient OLEDs. [40] The three AIEgens emitted very weakly in good solvents, while they became highly bright emitters in solid films with upgraded ΦPL (56%, 58%, and 62% for (MesB)2DMTPS, (MesB)2MPPS, and (MesB)2HPS, respectively). The double-layer OLED device ITO/NPB (60 nm)/(MesB)2HPS (60 nm)/LiF (1 nm)/Al (100 nm) afforded outstanding EL performance: 13.9 cd/A (maximum luminance efficiency), 4.35% (maximum external quantum efficiency), and 11.6 lm/W (maximum power efficiency), respectively. Moreover, siloles frequently act as the central skeleton to bridge chiral groups in chiral AIEgens. [41] Ng et al. reported the AIE compound 1 with thiourea linkers and chiral phenylethanamine groups, whose solid state could radiate strong green fluorescence with an ultrahigh quantum efficiency of 95%, in contrast to the molecularly dissolved state. [42] Interestingly, it was not circular dichroism (CD)-active upon photoexcitation, yet visible CD and circularly polarized luminescence (CPL) signals could be detected upon complexation with particular chiral acids such as mandelic acid (Figure 1(I)). Liu et al. introduced double mannose side chains to the HPS core via click chemistry. [43] The resultant chiral AIE siloles gave large dissymmetry factors (gem) of −0.32 for self-assembled aggregates in a confined microchannel environment. TPE, an archetypal hydrocarbon AIEgen, which boasts a propeller-like conformation composed of a central olefin stator and terminal phenyl rings, can be obtained readily from a one-step reaction. [44,45] In dilute solution, the relatively free environment furnishes the essential prerequisites for the phenyl rings to rotate smoothly about their single-bond axes, and the intramolecular rotations deplete the energy of the excited state dramatically, resulting in fluorescence quenching. [44] Contrarily, in the aggregation/solid state, the rotations and π-twist are obstructed by dense intermolecular C-H•••π interactions, and thus the radiative channels are rejuvenated, which provokes the enhanced luminescence. [46,47] Hitherto, TPE and its derivatives still account for the majority of AIE-active molecules. [44,48] Aside from their AIE behaviors, TPE-based AIEgens are often susceptible to external stimuli. [49,50] Xie et al. obtained a simple TPE hydrocarbon, TETPE (1,1,2,2-tetrakis(4-ethynylphenyl)ethene), via a McMurry coupling reaction, [51] which exhibits mechanoluminescence (ML) analogous to other heteroatom-containing TPE compounds. [49] Liu et al. coupled TPE with anthracene (AN) to obtain the AIE molecule 9-(3-(1,2,2-triphenylvinyl)phenyl)-anthracene (mTPA-AN) via a Suzuki reaction.
[52] Distinguished from the pressure-induced red-shifted and subdued emission of common organic piezochromic materials, its emission underwent an abnormal blue-shift (Figure 2(A)). In the crystals of mTPE-AN, the TPE units hindered the π-π interactions between adjacent AN dimers, and the complete energy transfer (ET) generated a single emission band from the AN excimer. Once the pressure exceeded 1.23 GPa, the compact aggregation of TPE units not only inspired the AIE mechanism for the raised emission but also suppressed the ET interaction, contributing to the high-energy emission wavelength. For the sake of realizing multidimensional anticounterfeiting, Huang et al. designed a TPE derivative, compound 2, satisfying the requirements for multiple responses simultaneously. [16] Compound 2 exhibited a classical AIE attribute in the THF/H2O mixture. From its crystal structure, it could be found that the interactions between the four phenyl rings were very weak, encouraging loose packing in the crystals and enabling sensitivity to external stimuli. After grinding the pristine powder (2p-o) of compound 2, the emission (λem = 450 nm, ΦF = 24.8%) of the ground powder (2p-g) was unaltered. When the powder was fumed with dichloromethane vapor within 10 s (2p-f), the original cyan emission was transformed into blue emission. Annealing of 2p-g and 2p-f at 120 °C resulted in the immediate quenching of their bright fluorescence (Figure 2(F)). Both the annealed powder (2p-h) and the crystal of compound 2 experienced a reversible color switch to deep red after 365 nm irradiation within 1 min. The UV/Vis reflection spectra suggested that the photochromism of compound 2 might be ascribed to photocyclization of the stilbene units (Figure 2(G)). Actually, deeper explorations of the AIE mechanism of TPE demonstrate the impact of E/Z isomerization (EZI) on the AIE process. [53] Tian et al. reported several pairs of configuration-controllable AIE E/Z isomers revolving around TPE, and all the Z isomers showed obvious piezofluorochromic phenomena after grinding. [54] Wang et al. prepared the TPE-cored luminogen 1,2-bis{4-[1-(6-phenoxyhexyl)-4-(1,2,3-triazol)yl]phenyl}-1,2-diphenylethene (BPHTATPE) by the copper-catalyzed click reaction and successfully isolated the two pure E/Z stereoisomers. [53] The E/Z-BPHTATPE expressed an astonishing AIE effect (αAIE ≥ 322) with a marked ΦF of 100% (Figure 2(H)). The EZI process of the two isomers could occur upon exposure to UV light at temperatures exceeding 200 °C. The crystals of the Z isomer assembled with poor crystallization quality; in comparison, the E isomer could self-organize into high-order microstructures. Grinding, pressing, and fuming could all cause variations in their emission spectra. Polymerization is a seminal route to manipulate the light-emitting properties of luminescent materials, and polymer networks also furnish the chromophores with braced frames and mechanical and chemical protection to engineer smart, stable, and flexible luminescent materials. [55] Suleymanov et al. prepared TPE-Tr (triphenylethenyl triazene) via a swift and versatile acid-induced coupling.
[56] The TPE groups could be grafted onto polymers bearing arenes directly by vinylation procedures, for instance, conveniently harvesting the TPE-polymers TPE-PS (polystyrene), TPE-PC (parylene C), and TPE-PVK (polyvinylcarbazole). Different swelling solvents had an appreciable influence on the fluorescence intensity of polymer TPE-PS suspensions because of the varied conformational freedom of the TPE groups in each solvent. Imitating the preparation of graphdiyne, [57,58] Liu et al. polymerized TPE groups through diyne coupling to fashion a two-dimensional (2D) fluorescent polymer (Figure 2(I)). [59] In this 2D structure, the abundant acetylene bonds locked the TPE building blocks, leading to its highly luminescent property. Wang et al. built supramolecular polymer networks (SPNs) for efficient AIE regulation with TPE-based tetratopic guests and pillar[5]arene polymer hosts (Figure 2(J)). [60] The noncovalent cross-linking in the host-guest systems could adjust the fluorescence intensity by changing the guest binding sites, pillar[5]arene unit density, solvent, or temperature. Thus, a considerably high ΦF of 98.22% could be reached. Schiff-base AIE materials belong to the heteroatom-containing AIE family, and their advantages, including fairly simple synthetic procedures, [61] convenient purification, and variable AIE responses, render them suited to diversified spheres. In particular, their easy coordination with metal ions makes them satisfactory candidates for developing fluorescent probes. Xie et al. developed the TPE-functionalized salicylaldehyde Schiff base TPE-An-Py via the Knoevenagel condensation with a yield of 60%. [61] Its emission behaviors in THF, THF/H2O, and the solid state confirmed aggregation-induced enhanced emission (AIEE) and a typical ESIPT process. This TPE-An-Py fluorescent probe, whose solution turned from colorless to yellow under sunlight and whose fluorescence was evidently quenched, signified specificity and selectivity for the detection of Cu2+ (Figure 3(A)). In practical water samples, TPE-An-Py achieved good content recoveries with a low detection limit of 2.36 × 10−7 M. Chai et al. explored the optical properties of the Schiff-base compound 3 on the basis of its AIE effect. [62] In the crystal state, Schiff base 3 exhibited a reversible color switch from colorless to apparent yellow upon UV irradiation, and a further response was ascribed to the formation of a strong complex between Cu2+ and the Schiff base (Figure 3(B) and (C)). Generally, these traditional AIEgens are often devised with single or multiple aromatic motifs in consideration of strong electron conjugation, [63,64] yet there are also some peculiar AIEgens without aromatic systems. [65,66] Zhao et al. unfolded the typical AIE effect from the extraordinarily nonaromatic annulene derivative cyclooctatetrathiophene (COTh). [67] Propeller-like conformations akin to those in TPE or HPS were not retained in COTh.
Beyond this, COTh has a conformationally chiral structure. Thus, CD and CPL spectroscopy could serve to reflect the dynamic change of its chiral properties. In THF solution, the CD signals of the enantiomers gradually degraded and almost vanished after UV irradiation for 14 min (Figure 3(D)). However, in the solid state of COTh, the CD signals showed no obvious difference. A similar situation also occurred in the CPL spectroscopy, clarifying that conformational inversion is confined in the solid state. The discovery of this nonaromatic AIEgen supplied a brand-new strategy to tune molecular vibration in AIE systems. Fang et al. noted the persistent room-temperature phosphorescence (RTP) from the nonaromatic organic compound cyanoacetic acid (CAA), with a considerably low molecular weight of 85. [68] Green RTP emission with a long lifetime of 0.862 s and an RTP quantum efficiency of 2.1% could be spotted from the CAA crystal (Figure 3(G)). In contrast to the concentration-quenching effect of ordinary luminophores, the fluorescence and phosphorescence were both converted from nonemission in dilute solution to brilliant emission at high concentration, signifying that aggregation played the key role in the formation of the AIE and the persistent RTP. The analysis of the CAA crystal confirmed that hydrogen bonds between carbonyl and hydroxyl groups in the packing mode facilitated the restriction of intramolecular motions and boosted phosphorescence and fluorescence in aggregates. Zheng et al. reported highly twisted nonaromatic AIEgens with tunable optical behaviors, consisting of acylated succinimides. [69] At room temperature, the photoluminescence and afterglows of the AIEgens N,N'-carbonylbissuccinimide (CBSI) and N,N'-oxalylbissuccinimide (OBSI) in the crystal state varied along with the change of excitation wavelength (λex), which was more apparent under cryogenic conditions. The crystal structure analyses disclosed that the effective through-space conjugation (TSC) from the n/π electrons facilitated the formation of diverse clusters, resulting in the tunable optical properties. In summary, there may not be integral π-conjugation residing in these unconventional AIEgens, but substituent groups with lone-pair electrons, e.g., carbonyl, imide, and cyano, will encourage electron delocalization and establish valid and stable TSC in the rigid molecular aggregates. [70,71] With regard to some other nonaromatic AIEgens, the vibration deriving from aromaticity reversal [72] will be suppressed in the aggregates, [73] rendering bright emission. Organometallic AIE materials Similar to the Schiff bases, some AIEgens have been incorporated into the constituents of organometallic materials such as metal-organic frameworks (MOFs) [74][75][76][77] and metallosupramolecules, [78] to forge luminescent materials with tunable optical properties [79,80] owing to their tunable structures and variable compositions. [81,82] The insertion of varied metal ions into organometallic AIE materials will draw forth some special electronic and optical properties, for example, the easily accessible transition from S1 to T1 leading to phosphorescence, [83] which is usually tough to achieve for pure organic AIEgens.
Furthermore, sundry functional groups can be incorporated into the AIE organometallics owing to the diversity of the metal ions and organic ligands, and thus tune the quantum yields, lifetimes, or emission color more flexibly. [74] However, the high preparation cost and the inevitable toxicity [84] are also challenges that cannot be ignored. Different from the plenitude of organic AIE molecules, AIE metallosupramolecules [9] are relatively rare owing to the finite combination modes of AIEgens with metallosupramolecular architectures. [92] Liu et al. assembled a string of isostructural organoplatinum metallaprisms consisting of central Pt(II), the linker 4,4′-bipyridine, and the triangular ligands tris(4-pyridyl)benzene (abbreviated as tpb) and tris(4-pyridyl)triazine (abbreviated as tpt). [93] The different coordination sequences also generated varied complexes (Figure 4(E)). The emission spectra of the tpb-based cis-1 and trans-1 and the tpt-based cis-2 complexes affirmed AIE emission upon increasing the hexane content in CH2Cl2 solutions, traced to phenyl rotation being hampered at the metal corners. The trans-2 complex was not AIE-active either in solution or in the solid state, ascribed to the subtle structural alterations between cis-2 and trans-2, which gave rise to the emission quenching (Figure 4(F)). AIE MATERIALS FOR NLO In light of this novel and fascinating effect, AIE materials have drawn numerous and continuing attention since the initial discovery in 2001. In the area of solid-state emission, [94,95] the emergence of AIE materials gives rise to intriguing phenomena such as aggregation-induced phosphorescence [96,97] and aggregation-induced delayed fluorescence (AIDF). [98,99] Moreover, the field is no longer content with the original, isolated investigations of AIE phenomena but now sets its sights on the study of aggregates. [100] Thus, some new and intriguing properties have been verified and exploited, and are vastly employed in myriad domains, involving therapy, [101] imaging, [102] solar cells, [14] chemical probes, [103] photodetectors, [104] data storage, [105] and so forth. [106,107] However, the majority of these applications of AIEgens revolve around the linear-optics regime, hampering in-depth exploration of their applications. [108,109] The NLO responses of materials can only be spotted under intense light, such as a laser. [110,111] When intense light impinges upon an optical medium, the interaction between light and matter arouses the relative displacement of electrons, which in turn affects the impinging light. [112] The complicated interplay between them leads to numerous NLO effects, such as second-harmonic generation (SHG), [113] sum-frequency generation (SFG), [114] optical rectification, [115] third-harmonic generation (THG), [116,117] two-photon absorption (2PA), [118] two-photon luminescence (2PL), [119] four-wave mixing (FWM), [120] the optical Kerr effect, [121] saturable absorption (SA), [58] and so forth. Actually, some typical AIEgens, such as TPE, [109] have been anticipated and shown to possess NLO properties. Furthermore, the transformation and modification of aggregate states can also be a facile and accessible channel to manipulate the inherent NLO and other properties [100] of AIE materials. It has been demonstrated that the aggregation of AIE materials can elevate the relative NLO responses and can also lead to switching of the NLO effect.
[122] The conversion of the orientation and strength of molecular dipoles in AIEgens might be the main inducement for the variation of NLO effects. Second-order NLO in AIE materials The quadratic optical susceptibility of materials, i.e., χ (2) , is tightly correlative to the molecular hyperpolarizability β. [123] Noted that the β converges toward zero for the centrosymmetric material systems whereby it demands the asymmetry or noncentrosymmetric space groups for second-order NLOs. [124][125][126] SHG is the most developed and vital second-order NLO process in which the output frequency is twofold of the incident photon field. [127] Some of AIE molecules have been deemed to have the potential for second-order NLO and forecasted the SHG activity. For instance, Liu et al. designed the helical TPE compound with the inherent chirality and fixed propeller-like conformation, and predicted its second-order NLO properties via density functional theory (DFT) and time-dependent DFT calculations. [128] Nevertheless, the realistic reports about AIE molecules exhibiting SHG responses are rare. The classical AIEgen TPE also has been certified to be the SHG active. Astonishingly, the TPE molecules can crystallize in a polar P 21 space group, [1] talented to manifest the exceptional even-order NLO effects. In this context, Xiong et al. explored the NLO behaviors of TPE AIEgen. Noteworthy that the TPE microcrystals expressed the fantastic wavelength-dependent NLO effects that the various NLO phenomena resided in the diverse wavelength ranges. [109] Upon femtosecond (fs) laser excitations between 840 nm and 970 nm, the explicit and sharp SHG signals were captured ( Figure 5(B)). While the incident and the two-photon excited fluorescence character arose as the beam wavelength was below 800 nm. Moreover, the halogen substituted TPE molecules, including 4Br-TPE and 4I-TPE, and the correlate high-quality crystals could be acquired through the ordinary solvent evaporation (THF/n-hexane). The resultant 4X-TPE crystals also formed with a noncentrosymmetric space group (P2 1 2 1 2 1 ) ( Figure 5(C)). Analogously, the NLO emissions of 4X-TPE also evinced the remarkable wavelengthdependent ( Figure 5(D)), power-dependent ( Figure 5(E)), and polarization-dependent ( Figure 5(F)) NLO properties. For the wavelength longer than 800 nm, the NLO emission primarily behaved as SHG, and meanwhile, the 2PF emission dominated as the wavelength was between 740 and 800 nm. Apart from the traditional AIEgen TPE derivatives, the fluorenone and its derivatives also have been certified to be [109] the SHG active and tunable optical properties. [129][130][131] For the sake of the SHG active, some push-pull π-conjugated molecules with the moderate dipole moments are utilized as organic NLO materials since the large magnitude of the molecular dipole moment might lead to the centrosymmetric packing modes. [132] Duan et al. devised and synthesized a hexaphenylene derivative 2,7-di([1,1′-biphenyl]-4-yl)-fluorenone (abbreviated to 4-DBpFO) via introducing the carbonyl group into the original p-hexaphenylene (p-6P) matrix ( Figure 6(A)). [108] In the dilute CHCl 3 solution, the 4-DBpFO radiated the yellow-red emission at around 595 nm with a short lifetime of 1.9 ns. It exhibited the brighter green-yellow emission in the aggregated state, indicating the AIE characteristics. 
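To make the symmetry requirement invoked at the start of this section explicit, the induced polarization can be expanded in powers of the driving field; the following lines are generic textbook relations rather than results specific to any of the AIEgens reviewed here:

$$P(t) = \varepsilon_0\left[\chi^{(1)}E(t) + \chi^{(2)}E(t)^2 + \chi^{(3)}E(t)^3 + \cdots\right], \qquad E(t) = E_0\cos\omega t,$$
$$\chi^{(2)}E(t)^2 = \tfrac{1}{2}\chi^{(2)}E_0^2\left(1 + \cos 2\omega t\right).$$

The cos 2ωt term radiates at twice the incident frequency (SHG), and the constant term corresponds to optical rectification. In a centrosymmetric medium the polarization must obey P(−E) = −P(E), which forces χ(2), and with it the effective β, to zero; this is why noncentrosymmetric packing of AIEgens such as TPE and 4-DBpFO is emphasised throughout this section.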
The corresponding fluorescence quantum yield was dramatically elevated from 3.9% to 35.8% under excitation at 440 nm, ascribed to the restricted rotation of the phenyl rings. In contrast to the centrosymmetric p-6P skeleton, the added carbonyl group broke the centrosymmetry and also introduced a permanent dipole perpendicular to the long molecular axis; this minor variation was enough to produce SHG responses. Moreover, the different crystal aggregate morphologies (microplate and microbelt) exhibited distinct SHG responses at 400 nm and 2PF responses at around 550 nm (Figure 6(C)). Wavelength-dependent SHG spectra recorded from 770 to 960 nm showed a maximum SHG signal at 445 nm (Figure 6(D)). The SHG response of the 4-DBpFO microplates was two to three times stronger than that of its analogue diphenylfluorenone (Figure 6(E)). In addition, the 4-DBpFO crystal exhibited an excellent laser damage threshold of 4.8 × 10 5 W cm -2 under 880 and 970 nm pump lasers. Push-pull structures, or D-π-A systems, are prevalent in AIE molecules owing to their standout properties, [133,134] e.g., strong and tunable intramolecular charge transfer [135] and highly ordered crystal packings or assemblies. [136] Besides being sensitive to polarity, the introduction of push-pull structures is an effective strategy to enhance the molecular hyperpolarizability and thereby induce NLO responses. [137] Jiang et al. reported a push-pull compound 4 based on a diphenylamine (DPA) donor and a dicyanovinyl acceptor. [138] The synergy between the hydrophilic oligo-oxyethylene chain and the hydrophobic dicyanovinyl groups conferred amphiphilic character (Figure 6(F)), which favored mechano-responsive behavior. The emission of this molecule upon aggregation was examined in THF/H 2 O mixtures, and the recorded spectra indicated aggregation-caused quenching (ACQ) of the long-wavelength luminescence together with AIE of the short-wavelength emission. Crystal structure analysis revealed noncentrosymmetric arrangements arising from the highly dipolar push-pull structure, facilitating conspicuous SHG. More detailed studies of the SHG behavior of this D-π-A system revealed mechano- and thermally stimulated SHG responses (Figure 6(H)). In D-π-A systems, the modification of organic groups always plays a key role in modulating the physical characteristics and shapes of molecules. David et al. studied ferrocenyl Schiff bases differing in their terminal substituent (H, compound 5; NO 2 , compound 6), which launched intense fluorescence upon aggregation. [139] The PL spectra and time-resolved luminescence spectra of the ferrocenyl Schiff bases in acetonitrile-water mixtures disclosed enhanced emission intensity and a longer fluorescence lifetime at a water content of 90%. The Kurtz-Perry powder method was used to determine the SHG efficiency of the crystals of compounds 5 and 6. Surprisingly, the centrosymmetric crystal of 6 (P 21 /c space group) produced an unexpected SHG response at 532 nm by virtue of noncovalent (C-H•••π) interactions. In contrast to Schiff base 5, which showed no SHG signal, the strong electron-withdrawing group (-NO 2 ) greatly enhanced the NLO property of compound 6, which was 1.46 times higher than that of the standard urea reference. Indeed, the introduction of chiral groups is a convenient avenue to noncentrosymmetric organization of molecules.
Therefore, a mass of chiral AIEgens are most likely to manifest the second-order NLO behaviors. Besides, the chiral AIEgens are also the hot spots in the CPL, which is tightly bound to the optoelectrical devices, threedimensional (3D) display, and information storage. [140,141] The superior CPL not only need the chirality but also require the bright emission at the solid state, and the chiral AIEgens with large g em can stimulate the marked CPL signals assisted by the AIE effect, reflecting the aggregation-induced CPL. [41] Jiang et al. reported the generation of CPL using the natural chiral DNA templates. [142] The achiral carbazolebased biscyanine fluorophores were designed exquisitely, and could be assembled with the DNA chains. The coassembly with DNA templates restricted the intramolecular rotational of the biscyanine fluorophores, sparking the enhanced emission. Meanwhile, the chirality of DNAs transferred to the DNA-biscyanine complexes, resulting in the CPL activity. Shang et al. realized the multicolor CPL in the single AIE system. [143] They linked the achiral AIEgens with the chiral cholesterol group by the ester bond. The obtained molecule Chol-CN-Py could assemble into the nanohelix and form the gel (g em = −3.0 × 10 -2 ) and xerogel (g em = −1.7 × 10 -2 ) films. In the protonation of Chol-CN-Py, the initial blue CPL could display the obvious bathochromic shift ranging from the green to orange while the g em still remained constant. Third-harmonic generation (THG) THG and other odd-order NLO processes do not suffer from the symmetry constraint. [144] Analogue to SHG, THG has been applied in up-conversion fluorescence, [145] drug delivery, and imaging. [146] THG is beneficial to deep-tissue and high-resolution bioimaging since the intensity of THG rise with the cube of the excitation power. [147] A plethora of representative inorganic materials, e.g., transition metal dichalcogenides (TMDs), [148] black phosphorus (BP), [149] germanium selenide (GaSe), [150] and perovskites, [151] have revealed striking third-order susceptibilities for THG process. Latterly, some AIEgens also signify the THG behaviors, gifted to be the ideal alternative for inorganic materials owing to their better flexibility [152] and biocompatibility. [153] Zheng et al. designed an AIEgen DCCN with merging the DPA and dicyanomethylene-benzopyran unit via Knoevenagel condensation. [154] In the solution of acetone/water mixtures, DCCN displayed the obvious AIE characteristic. In the crystalline state, it also unfolded the CIE feature with NIR fluorescence. The nanocrystals attained from aqueous solution were encapsulated to yield crystalline dots (CDs), and the impressive THG responses centered at 520 nm of CDs were recorded under the excitation of a 1560 nm fs laser. THG signals of the corresponding amorphous dots (ADs) and dilute solution were also inspected under the same condition, and 366-fold and 1183-fold higher signals were detected in ADs and CDs than solution state, respectively. The transformation of aggregation state also touched off the alteration of the inherent NLO property, referred to as the aggregationinduced nonlinear optical (AINLO) effects. The strong THG effects springing form strong push-pull dipolar moment and the plenteous π-conjugation of DCCN were employed for the clear and ultradeep imaging of the mouse cerebral vasculature and complex vessels. 
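As a brief aside on why THG lends itself to deep, background-free imaging, the perturbative scaling quoted above can be written compactly as

$$I(3\omega) \propto \left|\chi^{(3)}\right|^{2} I(\omega)^{3},$$

so appreciable third-harmonic signal is generated only near the focal volume where the excitation intensity peaks. This is the generic origin of the depth sectioning exploited in the DCCN imaging work, not a relation derived specifically for that system.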
As mentioned, incorporating donor and acceptor components into the AIE backbone structure is an effective pathway to realize NLO effects [155,156] since this construction is vulnerable to the polarity of the surroundings. For the THG active AIEgens, the prototypical nonplanar DPA or triphenylamine (TPA) are adopted as strong donor along with α-cyanostilbene acting as strong acceptor, forming the patterns like D-π-A, D-A-D. Tian et al. synthetized the AIEgen 1,1-dicyano-2-phenyl-2-(4diphenylamino)phenylethylene, referred to as DCPE-TPA based on a D-π-A skeleton with a deep-red emission. [157] This luminogen could generate three various aggregations relying on the packing model, showing polymorphismdependent luminescence. The DCPE-TPA NPs fabricated by encapsulating the DCPE-TPA molecules within polymer produced marked THG signal at 517 nm under 1550 nm laser excitation. Similarly, Wang et al. explored the in vivo imaging of the intravital brain vasculature grounded on the TPATCN NPs, and the AIE essence of this D-A-D molecule was verified via THF/H 2 O system. [158] To achieve the deeper penetration depth, the multichannel NLO imaging covering both the THG and three-photon fluorescence (3PF) signals was operated. The responses of THG channel (495-540 nm) circumfused 517 nm. In view of the great coherence of THG signals, it was much convenient to track the flowing of NPs in blood vessels from THG imaging. Meanwhile, the 3PF images of blood vessels were consistent with the THG ones. Based on the original D-A-D structure, Qin et al. attached the tertbutyl (t-Bu) groups to the terminal benzene rings of DPA, impeding the generation of strong π-π stacking interactions (Figure 7(A)). [159] The obtained AIEgen, referred to as BTF, expressed the ultrabright far-red/near-infrared emission with quantum yields of 42.6%. The readily fabricated BTF dots via nanoprecipitation manifested the distinguishing THG peak at 517 nm and intense 3PF at 650 nm. The intrinsic THG signals could be detected at various penetration depths from 0 to 900 μm, offering additional structural information at superficial depths. TPE is one of the most iconic AIE units, and the incorporation of TPE can not only reinforce the nonplanarity of molecule, but also extend the π-conjugation, aiming to form and foster the NLO effects for luminogens. Based on the 2-(2,6-bis((E)-4-(diphenylamino)styryl)-4Hpyran-4-ylidene)malononitrile (TPA-DCM), Nicol et al. manufactured the aniline derivate TPE-TETRAD with TPE groups. Stimulated by a femtosecond laser at 1560 nm, the spike of THG occurred at 515 nm. [160] The concurrent three-photon luminescence (3PL) was spotted at 668 nm, speculated that it was interrelated with the reabsorption of THG photons by TPE-TETRAD molecules. Qian et al. modified the 2,3-bis(4-(diphenylamino)phenyl)fumaronitrile with TPE moieties to generate the typical Dπ-A-π-D architecture 2,3-bis(4-(phenyl(4-(1,2,2triphenylvinyl)phenyl)amino)phenyl)fumaronitrile (TTF) (Figure 7(D)). [161] The associated explorations concerning the TTF focused on three different systems: chloroform/toluene solution, aqueous solution, and solid state. In the chloroform/toluene mixture, only 3PL and 4PL (four-photon luminescence) could be detected. However, it engendered the TTF nanoaggregates in the aqueous solution, and the observed THG intensity was boosted with the aggregation degree of TTF molecules increasing. 
When the system was transformed into the solid state, the THG signals changed with the laser wavelength: as the excitation was tuned from 1320 to 1560 nm, the THG spectral response remained narrow (Figure 7(F)), whereas when the laser wavelength was shifted from 1620 to 1860 nm the THG peaks became broad (Figure 7(G)). Two-photon fluorescence (2PF) In 1990, Webb et al. invented two-photon fluorescence microscopy (2PM), [162] and this technology has since been widely exploited and developed further. Compared with traditional one-photon fluorescence, the photons absorbed in the 2PF process carry lower energies and the excitation wavelengths lie in the near-infrared rather than the visible or UV region, greatly diminishing autofluorescence. Furthermore, 2PF produces less photodamage to specimens and offers deeper tissue penetration. [163,164] The 2PA action cross section (ησ 2 ), a quantitative criterion for the two-photon-induced response, [165] can be raised by rational structural design. Diverse aggregates of luminogens, such as cocrystals, [166,167] NPs, and polymers, have been demonstrated to possess AIE and 2PF features simultaneously. Moreover, a boosted ησ 2 has been realized in molecular aggregates owing to the higher quantum efficiency (η) brought about by the AIE effect. [168] Imaging [17] and fluorescence probes [169] are the two chief applications of two-photon AIE molecules. Li et al. demonstrated the conversion from ACQ to AIE by adjusting the molecular packing mode through regioisomerization. The basic framework of the molecules consisted of dithieno[2,3-a:3′,2′-c]benzo[i]phenazine (TBP) and the twisted molecular rotor TPA. [170] Going from the TBP-e-TPA molecule with its long-range cofacial packing mode to TBP-b-TPA with its discrete cross packing mode (Figure 7(H)), the minor change in the position of the TBP unit switched the behavior from ACQ to AIE owing to the resulting steric hindrance. The AIE-active TBP-b-TPA also exhibited a remarkable 2PA property (Figure 7(I)). In the NP state, the 2PF response was reinforced relative to that in THF. From the 2PF spectra of TBP-b-TPA, its 2PA cross section σ 2 was measured to be 608 ± 9 GM and 207 ± 7 GM (1 GM = 10 -50 cm 4 s photon -1 ) in the NIR-I and NIR-II regions, respectively. TBP-b-TPA NPs were used to visualize the vascular architecture of the mouse brain, conveying clear vasculature information even at a depth of 700 μm. Departing from the generally elaborate synthetic routes to two-photon AIEgens, Niu et al. used an atom-economic and convenient synthetic method to obtain AIE-active acrylonitrile derivatives. [17] The two acrylonitriles TPAT-AN-XF and 2TPAT-AN showed dissimilar AIEE effects in aqueous suspension, where the several-fold fluorescence enhancement of TPAT-AN-XF was attributed to its denser aggregates compared with 2TPAT-AN. In view of the favorable ICT effect and the beneficial π-conjugated structure, both AIEgens showed marked 2PF signals between 800 and 980 nm under femtosecond pulsed laser excitation.
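Since the 2PA action cross section ησ 2 and the GM unit are quoted repeatedly here, a minimal numerical sketch of the standard power-dependence check used to confirm multiphoton excitation may be helpful; the power and signal values are illustrative placeholders and are not data from any of the works cited in this review.

```python
import numpy as np

# Hypothetical power-dependence check for multiphoton excitation.
# For an n-photon process the emission F scales as F ~ I^n, so the slope of
# log(F) versus log(I) should be close to 2 for 2PF and close to 3 for 3PF.
power = np.array([10.0, 14.0, 20.0, 28.0, 40.0])   # excitation power (mW), assumed
signal = np.array([1.0, 2.0, 4.1, 7.9, 16.2])      # integrated emission (a.u.), assumed

slope, _ = np.polyfit(np.log(power), np.log(signal), 1)
print(f"log-log slope = {slope:.2f} (close to 2 indicates two-photon excitation)")
```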
The ησ 2 of 2TPAT-AN was higher than TPAT-AN-XF in the area of 800-880 nm, and the σ 2 of 2TPAT-AN and TPAT-AN-XF were calculated to be 508 GM and 366 GM at 880 nm, respectively. The vitro imaging implied that 2TPAT-AN NPs retained the high biocompatibility, great resolution, and deep tissue penetration, and meanwhile it could stain lysosome in live cells selectively. Wang et al. designed the highly bright fluorophore BTPETQ for the intravital 2PF imaging of the mouse brain and tumor vasculatures. [5] The uniform BTPETQ dots emerged in the process of nanoprecipitation, bearing the high quantum yield of 19% ± 1% in aqueous media. The 2PA cross section of BTPETQ dots was evidenced to be as large as 7.63 × 10 4 GM at 1200 nm, and the ultradeep tissue penetration of 924 μm was observed at the assistance of NIR-II excitation. Moreover, the AIE dots showcased the obviously distinct 2PF in the tumor vasculatures and normal blood vessels resulting from the aggregation of dots in the unique leaky structures of the tortuous tumor blood vessels. 2PF probes are the promising tools to identify the ions or small molecules, [171] and even to sense the fluctuations of the surroundings. [172] Li et al. reported a conjugated macrocycle polymer P[5]-TPE-CMP composed of TPE and pillararene with the strong 2PF property and gratifying stability against photobleaching. [173] The introduction of pillararene integrated the virtues of solid porous polymer and macrocyclic host, e.g., the recyclability and indissolubility in the common solvents. Under the excitation of both near-UV and NIR realms, the P[5]-TPE-CMP acted as a talented sensor (Figure 7(J)). When various ions such as Na + , Ca 2+ , F -, Cl -, Fe 2+ , Ce 3+ , Y 3+ were added into P[5]-TPE-CMP suspension, no noticeable changes in 2PF were discovered. However, the Fe 3+ ion could spark the prominent quenching of 2PF (92.9%) on account of the size-matching effect of P [5]-TPE-CMP. Similarly, the carcinogenic organic dye 4-amino azobenzene could also touch off the distinguished 2PF quenching (98.5%) selectively, and it remained the great repeatability after washing with water for several times followed by centrifugation. Zhang et al. devised a brand novel AIEgen MTPA-Cy through condensation reaction with 4-(bis(4-methoxyphenyl)amino)benzaldehyde and 3ethyl-1,1,2-trimethyl-1H-indolium iodide. [174] As a result of the TPA group, MTPA-Cy emitted weak fluorescence upon the 365 nm radiation, and the intrinsic color of MTPA-Cy solution faded and the weak fluorescence was replaced by the bright yellow-green fluorescence at 514 nm after the original ethylene bridge was oxidated by ClO -. Under the 730 nm excitation, it manifested the 2PF behaviors with the cross section of 15.3 GM, and the 2PF intensity was gradually raised with the concentration of ClOascended from 0 to 50 μM, revealing the astounding potential to be the 2PF probe. Apart from 2PF, the 3PF pertaining to the fourth-order NLO effects also reside in the piles of AIE materials. [175] Compared to 2PF, 3PF will lessen the out-of-focus excitation efficiently and dramatically elevated the signal-tobackground ratio to the higher order levels. [176,177] Fang et al. tested and verified two new three-photon absorption (3PA) AIEgens Tpy and Tpy-Hex for detecting the trace amounts of silver ions quantitatively in organism. 
[178] The 3PA coefficient of Tpy-Hex could reach to 1.06 × 10 -22 cm 3 /W 2 with 3PA cross section σ 3 of 4.1 × 10 -78 cm 6 s 2 photon -2 , and the 3PF intensity also climbed with formation of aggregate. The Tpy-Hex contained two main constituents TPE and pyridine. The pyridine moiety could coordinate with Ag + easily, leading to the unexpected optical properties. In the aqueous solution of Tpy-Hex, it displayed significantly strengthened fluorescence at 435 nm in the presence of Ag + . Besides, the combination of the Tpy-Hex and Ag + (denoted as Tpy-Hex-Ag + ) upgraded the initial NLO activity of Tpy-Hex, holding the higher 3PA cross section σ 3 (6.9 × 10 -78 cm 6 s 2 photon -2 ) and 3PA coefficient (2.21 × 10 -22 cm 3 /W 2 ). Zong et al. incorporated the large isolation groups on the perylene diimide (PDI) core to realize the conversion from ACQ to AIE via suppressing the π-π stacking. Under the excitation of a 1550 nm fs laser, the bright 3PF was discovered at 650 nm. The DCzPDI (PDI with double carbazolyl moieties) NPs could offer the high-resolution brain blood vessels at varied vertical depths, and the maximum penetration depth could attain to 450 μm. The restricted intramolecular rotation (RIR) mechanism of the AIE materials will also affect their NLO responses. Peng et al. prepared a new D-π-A compound 5,6-di(4-N,N-(dimethylamino)phenyl)pyrido [2,3-b]pyrazine (APPP) with the propeller-like construction. [122] The AIEE effect of APPP in the water-dominated mixture could be affirmed by the fluorescence spectra. APPP also demonstrated the viscosityinduced emission characteristic in the glycerin/EtOH solution owing to the conversion to the TICT state from the RIR. Expect for the outstanding AIE phenomena, the unique thirdorder NLO features were also demonstrated by open-aperture Z-scan technique. It was uncovered that APPP possessed the reverse saturable absorption (RSA) property in EtOH solution, yet it switched into the converse SA at the state of solid. Astonishingly, the conversion between RSA and SA could also come true as the viscosity changed, and the NLO absorption coefficients (β eff ) of APPP in pure EtOH and the glycerin/EtOH mixture (v/v = 9:1) were measured to be 0.54 × 10 -11 m/W and −2.95 × 10 -11 m/W, respectively. Moreover, the ascension of temperature also could shift the state of RSA and SA. NLO APPLICATIONS OF AIE MATERIALS AIE materials, especially those with D-A building blocks, are endowed with fortes involving large NLO coefficients, ultrafast NLO responses, fortified optical parameters, less thermal dissipations, high chemical and physical stability. These excellences enable the NLO-active AIE materials as the satisfactory platform to practice NLO applications. The alterations of aggregates draw forth the mutations of intermolecular interactions, [183] packing modes, [184] or the transition dipole moments, and then conquer the field of regulating the NLO properties of AIE materials. From the concentrated solutions to the polymers, crystals, NPs, gels, clusters, and so forth, the distinct aggregates of AIEgens may trigger the disparate NLO effects or the modifications of NLO strength, which are adopted to the different NLO domains. In this section, we will discuss the NLO applications of AIE materials in optical limiting, solid-state laser (SSL), and bioimaging. Optical limiting The emergence of optical limiting is grounded on the process of RSA or excited state absorption. 
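The open-aperture Z-scan results for APPP discussed above can be pictured with a minimal model of the normalized transmittance in the thin-sample, small-signal limit; the beam intensity, effective sample length and Rayleigh range below are illustrative assumptions, and only the signs and magnitudes of β eff follow the values quoted for APPP.

```python
import numpy as np

def open_aperture_T(z_mm, beta_eff, I0, L_eff, z0_mm):
    """Normalized open-aperture Z-scan transmittance in the thin-sample,
    small-signal limit: T ~ 1 - q0 / (2*sqrt(2)), with
    q0 = beta_eff * I0 * L_eff / (1 + (z/z0)^2). Valid for |q0| < 1."""
    q0 = beta_eff * I0 * L_eff / (1.0 + (z_mm / z0_mm) ** 2)
    return 1.0 - q0 / (2.0 * np.sqrt(2.0))

z = np.linspace(-20, 20, 9)   # sample positions along the beam axis (mm), assumed
I0 = 5e12                     # peak on-axis intensity (W/m^2), assumed
L_eff = 1e-3                  # effective sample length (m), assumed

# Positive beta_eff (pure EtOH): reverse saturable absorption, transmittance dip at focus.
print(np.round(open_aperture_T(z, 0.54e-11, I0, L_eff, z0_mm=4.0), 3))
# Negative beta_eff (glycerin/EtOH 9:1): saturable absorption, transmittance peak at focus.
print(np.round(open_aperture_T(z, -2.95e-11, I0, L_eff, z0_mm=4.0), 3))
```

The sign of β eff alone therefore determines whether the trace shows a dip (RSA, useful for optical limiting) or a peak (SA) at the focus, which is the switch described for APPP.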
[185] When a laser illuminates an optical medium that exhibits optical limiting, the high-energy radiation is filtered out whereas low-intensity light passes through the medium effortlessly. Optical limiters can therefore not only protect human eyes or sensitive optical devices from photodamage, but also offer a means of tuning the laser output. Chan et al. prepared polymers bearing different substituents [R = C 8 H 17 or C 6 H 5 ] via polymerization. [186] The resultant polymers P1 (R = C 8 H 17 ) and P2 (R = C 6 H 5 ) showed the AIE (Figure 8(A)) and ACQ effect, respectively, illustrating the impact of the variation in molecular structure. In chloroform solution, the transmitted fluence of both polymers rose almost linearly with increasing input fluence for fluences below 1 J/cm 2 . Once the fluence exceeded 4 J/cm 2 , the transmitted fluence reached a plateau and saturated, elucidating the optical limiting nature of the P1 and P2 polymers (Figure 8(B)). Diyne polymers such as graphdiyne have been shown to display unexpected NLO absorption, which may in turn confer optical limiting properties. Hu et al. constructed a hyperbranched diyne polymer with TPE via polycyclotrimerization catalyzed by TaBr 5 . [187] The hyperbranched poly(tetraphenylethene) hb-P1 remains stable at temperatures above 400 °C and is highly soluble in the majority of organic solvents by virtue of the twisted conformation of TPE. The polymer hb-P1 inherited the impressive AIE feature of TPE: its THF solution emitted faint fluorescence, while its aggregates in THF/H 2 O mixtures and the solid both gave intense fluorescence with a maximal quantum yield of 81%. The optical limiting characteristic of hb-P1 was inspected in THF solution, where a 532 nm optical pulse was transmitted through the solution at low incident fluence. Once the incident fluence exceeded 60 J/cm 2 , the transmitted fluence started to deviate from linear growth. Besides high-molecular-weight AIE polymers, some AIE-active small molecules have also been explored as potential optical limiters (Figure 8(D)). Liu et al. developed a series of Pt(II) complexes bearing difluoroboron-dipyrromethene (Bodipy) acetylide ligands and various 2,2′-bipyridyl derivative ligands. [188] The three distinct Pt(II) complexes, named Pt-1, Pt-2, and Pt-3, were explored for their photophysical properties. Pt-1, Pt-2, and Pt-3 gradually clustered as the water fraction in their CH 3 CN solutions was increased, with simultaneously brighter emission, suggesting typical AIE characteristics. These complexes showed positive absorption from 380 to 460 nm and beyond 530 nm, implying the possibility of RSA.
The associated optical power limiting examinations were conducted by nanosecond laser pulser at 532 nm, and the outstanding output energy attenuation as the high-density incident imported, confirming the occurrence of optical limiting. Among the three polymers, Pt-2 demonstrated the optimal capability of optical limiting (Figure 8(F)). Organic SSL Laser is one of the indispensable elements in the NLO domains, [189,190] and the sustainable, jarless, and adjustable laser resources are the fundamentals of yielding and characterizing the diverse NLO effects. [191] For the moment, the solid-SSLs recline principally on the inorganic materials, such as TMDs, [192] perovskites, [193] MXene. [194] Although the lasing technologies counting on the inorganic materials have been growing maturity, some drawbacks still exist, containing the hard preparation, high cost, and tough tunability. [195] The organic lasers emerge with high robustness, [196] handy, time-saving and economical solution-processed fabrications, and admirable optical performance, grabbing the tremendous attention. [197] In addition, the manifold excited state processes can be fulfilled facilely among most of the organic luminogens. [198,199] Howbeit, the ACQ effect of organic gain materials seriously encumber the wider and further their employments in SSL due to the upper lasing threshold and the forbidden of the laser actions. [200] Direct at this issue, one of the appropriate approaches is to incorporate the AIEgens to shrink the nonradiative loss. Wei et al. devised and synthesized the organic molecule 1,4-bis((E)-4-(1,2,2-triphenylvinyl)styryl)-2,5-dimethoxybenzene (TPDSB), unveiling the protruding AIE behavior as high as ∼2500 times augmented. [201] TPDSB microribbons were gained by the solution-drying method at ambient temperature, and the spatial resolved PL spectra proved the appealing optical waveguiding property of the microribbon with the low optical loss of 0.012 dB μm -1 . At the excitation of 355 nm nanosecond pulse laser with varied pump energy, the related PL spectra witnessed the transition from the spontaneous emission to the amplified spontaneous emission (ASE) and the lasing threshold was identified to be as low as 653 nJ cm -2 (Figure 8(G)). Meanwhile, the discovery of spatial interference pattern at both terminuses also declared the existence of Fabry-Pérot (F-P) microcavity with a quality factor of 2565 within the microribbon, suggesting that the TPDSB was an attractive active medium for laser action of 520 nm. Analogously, Liu et al. realized the microlaser by adopting a typical TPE-modified AIEgen TPE-BODIPY. [202] In contrast to the organic laser with high-crystallinity crystals, this microlaser stemmed from the noncrystalline coaggregation of TPE-BODIPY and epoxy resin. The aggregation of TPE-BODIPY in THF/H 2 O system appeared with sevenfold enhancement of the fluorescence quantum yield. With the help of surface tension effects, the coaggregation of epoxy resin and TPE-BODIPY could be fashioned into noncrystalline microspheres, and the suitable addition of water and surfactant could guarantee the appropriate size of microspheres. The constructed microspheres were excited by the nanosecond laser (480 nm, 5 ns duration, 20 Hz), and the shining outer boundary could be found, symbolizing the formation of whispering gallery mode (WGM) above the lasing threshold (Figure 8(H)). 
From the lasing spectra of microlaser with distinct diameters, the singlemode lasing at 538.2 nm for 8.9 μm was detected accompanied with the low threshold of 2.24 mJ cm -2 and the full width at half maximum (FWHM) of 0.2 nm (Figure 8(I)). Apart from the TPE containing SSL, some new-style organic lasers also spring up. Lv et al. constructed a highly emissive AIEgen BPMT with the building block of TPA and benzo[c] [1,2,5]thiadiazole (BTA). [203] The acquired rod-like crystals could achieve the high PLQY of 48.7% with the fluorescence spanning the deep-red and NIR region. Besides, the crystals were susceptible to the alteration of the pressure, and the reversible decline of brightness along with the redshift of fluorescence wavelengths occurred when the pressure converted from 1 atm to 5.1 GPa. Dissolving the BPMT and epoxy resin with dichloromethane, and the BPMT-doped hemispherical microresonators assembled spontaneously on a distributed Bragg reflector (DBR) mirror surface (Figure 8(J)). Excited under the 10 × objective (NA = 0.3) system by 351 nm nanosecond laser, the fluorescence microscope images presented the shining outer boundary subordinated to the WGM microresonator for the pump power exceeding the lasing threshold of 22.3 kW cm -2 . At the lasing wavelength of 735.2 nm with the line width of 0.23 nm, the quality factor of WGM microcavity was determined to be 3200. Aside from the complicated π-conjugated structures, [204] the AIE materials with the simple π-conjugated structures may also actualize the efficient laser. Tang et al. studied the optical properties of a series of (2-hydroxyphenyl)propanone derivatives with the oversimplified scaffolds. [205] All of the four derivatives including single-benzene emitted poorly in the solutions but displayed bright emission in the film and crystals. The four compounds could foster the high-quality crystals through the ordinary solvent diffusion, and the organic crystals had the similar and strong absorption peak at around 365 nm. They gave out the intensive green fluorescence with the high quantum yields of 0.72-0.84, suggesting the CIE feature. Additionally, the tip parts of these needlelike crystals showcased much more brilliant emission than their bodies, implying the self-waveguided properties. The PL spectrum of the one of the crystals collected from the edge side verified the ASE by the excitation of pulsed laser, revealing the lasing property of the simple AIE materials. Bioimaging and photodynamic therapy (PDT) AS expounded, bioimaging is one of the most vital and practical NLO implementations touching upon the AIE materials. Extraordinarily higher resolution and larger signal-to-noise ratio, improved penetration depth and lower photodamage to the biological cells and tissues incline it to be the promising alternate to the convenient single-photon bioimaging technology. Wang et al. attained the 2PF composites by complexation of the AIEgen TPEPy with the biologic fetal bovine serum (FBS). [206] The intensity of TPEPy fluorescence acquired sixfold reinforcement with the 10% FBS aqueous solution, and the gained TPEPy-FBS was endued with the great biocompatibility, photostability, and low phototoxicity and cytotoxicity. More significantly, TPEPy-FBS behaved the fetching 2PF in the area of far-red and NIR upon the excitation of laser, which was fit for the in vivo bioimaging. The suspension of TPEPy-FBS nanocomposites was injected into the mouse brain model with craniotomy. 
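The quoted quality factor can be cross-checked with the usual linewidth estimate; this is a routine consistency check rather than part of the original analysis.

```python
# Whispering-gallery-mode quality factor estimated as Q ~ lambda / delta_lambda,
# using the lasing wavelength and line width quoted above for the BPMT microresonator.
lam_nm, dlam_nm = 735.2, 0.23
print(f"Q ~ {lam_nm / dlam_nm:.0f}")   # ~3200, consistent with the reported value
```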
The 840 nm femtosecond laser was applied for the 2PF imaging owing to the largest σ 2 value, and the clear and bright 2PF images at diverse tissue depths could be obtained with the largest depth of 656 μm (Figure 8(N)). The detailed information about the big blood vessels and small blood capillary vessels could be distinguished in the 2PF imaging, and the signal-to-background ratios of the 2PF images were measured to be as high as 234. The majority of 2PF bioimaging mainly involves the widerange biological organs or tissues, but less attention has been paid to the cell organelles. Alam et al. put forward two zwitterionic AIEgens CDPP-3SO 3 , CDPP-4SO 3 with the sulfonate function group as organelle-targeting fluorophores. [207] The core molecules of CDPP-3SO 3 and CDPP-4SO 3 embraced the D-π-pyridine architecture, in which the pyridine moiety could be transformed into pyridinium, intensifying the initial push-pull effect (Figure 8(O)). Therefore, the two compounds were ideal for bioimaging. The special pyridiniumsulfonate group of the two AIEgens offered the possibility to allow for the endoplasmic reticulum (ER) targeting selectively as a result of ample existence of phosphocholine cytidylyltransferase (CCT) on the ER membrane. Clear signals of HeLa cells could be obtained for the three AIEgens from the two-photon microscope (Figure 8(P)). Some AIE molecules are also propitious to 3PF since the AIE chromophores will amplify the 3PA cross sections at the aggregation. [175] Compared to the complex synthesis method in organics above, Feng et al. synthesized the novel thiophene terpyridine zinc complex (DZ1) to be awarded with remarkable 3PA property and AIE activities alongside the conspicuous response to RNA, totally varying from its organic ligand (DL1) without obvious NLO effect. [208] From the three-photon excitation fluorescence method under 1700 nm fs laser, the 3PA cross section σ 3 was determined to be 5.28 × 10 -82 cm 6 S 2 photon -2 . The σ 3 of DZ1 in aggregation (ethyl acetate/acetonitrile (70%)) was measured to be 1.11 × 10 -81 cm 6 S 2 photon -2 , rising 1.78-time higher than the molecular state. DZ1 was feasible to mobilize in the organisms by occasion of its low toxicity, wonderful chemical stability and admirable biocompatibility. The imaging experiments of DZ1 was operated in the HepG2 cell. Monitoring the motivations of DZ1 in the cells found that it could pass through the cell membrane and then target mitochondria like MitoTracker (a green fluorescent protein). The open-aperture Z-scanning and 3PF was performed to explore the variability of NLO properties in the case of the addition of RNA, and the σ 3 value move up to be 1.17 × 10 -81 cm 6 S 2 photon -2 (1.92-fold) because of the interaction between DZ1 and RNA. PDT is a noninvasive strategy to fight the tormenting caners or tumor, in which the cytotoxic reactive oxygen species (ROS), [209] such as singlet oxygen ( 1 O 2 ), [210] from photosensitizer (PS) under the focused light will induce the apoptosis of cancer cells directly [211] or indirectly. [212] However, the ineffective generation of ROS, low tissue penetration, poor selectivity, and biocompatibility of common molecules hinder its application in biomedicine. [213] Some AIEgens with multiple-photon characteristics are recognized as the ideal alternatives, which can not only improve the extinction coefficients but also increase the light penetration in the therapeutic process. [214,215] Wang et al. 
exploited an AIE PS TQ-BTPE composed of [1,2,5]thiadiazolo [3,4-g]quinoxaline (TQ) for strong acceptor and TPE for the electron donor. [101] The rational design of this PS contributed to the 2PA upon NIR-II femtosecond laser excitation, red-shift and broad absorption in visible region. The σ 2 of TQ-BTPE was measured to be 168 GM at 920 nm and 49 GM at 1200 nm, revealing the remarkable two-photon activity. For the aggregate of TQ-BTPE in aqueous medium, it could engender much more 1 O 2 than the classical chlorin e 6 (Ce6) even under the general visible light excitation. Comparing with the radiation of NIR-I, the NIR-II light could induce the better ablation of HeLa cells in fresh pork tissue on occasion of the deeper penetration capability in the tissues. Although some AIE PSs are capable to produce the ample ROS to kill the cancerous cells, they also present the high phototoxicity to the normal tissues. In order to settle the dilemma, some PS systems bearing the prominent selectivity to tumors have been exploited. Yang et al. encapsulated the typical PS bis(pyrene) (BP) with the liposomes to form the BP@liposomes complex. The BP was selected as PS for its considerable high σ 2 of 2.4 × 10 5 GM, and the coassembly of BP with liposomes elevated the accuracy of PDT. While the BP@liposomes complex was in touch with the normal tissues, the BP molecules still maintained the pristine dispersed state, appearing the negligible phototoxicity. As it reached tumor sites with passive and active mechanism, the liposomes were gradually decomposed by intracellular phospholipase, and the dispersed state of BP was converted into the aggregate state, leading to the eradication of tumor under the two-photon laser radiation. Apart from the challenge of target selectivity in two-photon PDT processes, the residual of PSs in vivo is also a hassle. Li et al. fashioned the AIE NPs 4-(5-(1-(4-(tert-Butyl)phenyl)-1Hphenanthro [9,10d]imidazol-2-yl)-thiophen-2-yl)-7-(4-(1,2,2triphenylvinyl)phenyl)benzo[c] [1,2,5]thiadiazole (TPE-PTB) via lipid-encapsulated method. The TPE-PTB exhibited the high quantum yield of 23% and the remarkable 2PA cross section of 560 GM (excitation at 800 nm). Benefiting from the penetration depth (500 μm) in tumor tissue, the TPE-PTB NPs could release the two ROS (O 2 and hydroxyl radicals) simultaneously under the NIR laser. More importantly, the PS could be eliminated naturally after five days of injection, dramatically reducing the side effect to the body. CONCLUSIONS AND PERSPECTIVES The emergence of AIEgens refreshes the cognition of the orthodox fluorescent materials. From the primitively distressful ACQ to the presently exciting AIE, the countless branches of AIE families involving HPS, TPE, Schiff-bases, fluorenone, and organometallics have sprung up over the past few decades. In the regimes of linear optics and optoelectronics, the numerous AIE materials yield the brilliant achievements and close concern. Meanwhile, the AIE materials start to be budding and evolve in the NLO fields, and the scads of AIEgens have been witnessed to share the impressive NLO effects, e.g., SHG attributed to the second-order NLOs, THG, 2PA, 2PL, SA, and RSA for the third-and higher order NLOs. Currently, the multiphoton fluorescence of AIE materials has been functioned into the bioimaging, biosensing, and biotherapy widely and maturely. Aside, the SA and RSA effects of AIEgens also have been progressively exerted in the laser technologies. 
Across these studies of the NLO behavior of AIE materials, nearly all have recognized aggregation-induced enhancement of NLO responses and even aggregation-triggered switching between different NLO effects. On the AIE side, mature and sound theories, including RIR, RIM (restriction of intramolecular motion), RIV (restriction of intramolecular vibration), ICT, and J-aggregation, explain AIE behavior satisfactorily and guide the design of novel AIE compounds. However, there is still no well-established theoretical framework to analyze and explain the changes in NLO behavior that originate from aggregation. Moreover, research on the NLO properties of AIEgens is still in its infancy. The sterner challenge is how to develop and exploit more AIEgens suitable for NLO applications. Many NLO features of existing AIE materials have not yet been uncovered, and there is ample scope for applying AIE materials in NLO domains. For instance, numerous AIEgens can crystallize in noncentrosymmetric space groups, making them candidates for even-order NLO processes such as SHG, SFG, and optical rectification. Moreover, AIE compounds embodying pronounced push-pull structures also tend to develop molecular dipoles, giving them latent capacity for appreciable NLO responses. In sum, the flourishing of AIE materials provides a favorable moment for the development of NLO, and more flexible and stable AIE materials with optimized NLO responses are expected to find use in photonics and optical devices. ACKNOWLEDGMENTS Financial support from the National Natural Science Foundation of China (project numbers 21773168, 21531005, and 91622111) is gratefully acknowledged.
14,063
2021-02-02T00:00:00.000
[ "Materials Science", "Physics" ]
Contactless graphene conductivity mapping on a wide range of substrates with terahertz time-domain reflection spectroscopy We demonstrate how terahertz time-domain spectroscopy (THz-TDS) operating in reflection geometry can be used for quantitative conductivity mapping of large area chemical vapour deposited graphene films on sapphire, silicon dioxide/silicon and germanium. We validate the technique against measurements performed with previously established conventional transmission based THz-TDS and are able to resolve conductivity changes in response to induced back-gate voltages. Compared to the transmission geometry, measurement in reflection mode requires careful alignment and complex analysis, but circumvents the need of a terahertz transparent substrate, potentially enabling fast, contactless, in-line characterisation of graphene films on non-insulating substrates such as germanium. The development of scalable integrated manufacturing pathways for graphene is crucial to all its emerging applications and industrial development 1 . Chemical vapour deposition (CVD) has become the dominating technique to synthesise large area "electronic-quality" graphene 2 , with the size of single mono-layer graphene crystals now on the cm-scale 3 and continuous films now routinely produced roll-to-roll or at a size just limited by the reactor 4 . In fact, progress in growth reached a level where detailed, adequate characterisation over such large areas has become a key challenge. Prevailing electrical characterisation for instance is based on the fabrication of field-effect or Hall bar devices typically combined with Raman spectroscopy. This is time-consuming for large samples and for statistically relevant sample numbers in particular also as the as-grown graphene typically has to be transferred away from the growth substrate. Among a range of emerging contactless characterisation methods, terahertz time-domain spectroscopy (THz-TDS) operating in transmission geometry has been demonstrated to allow the direct, accurate mapping of graphene conductivity and mobility over large areas, producing data consistent with the Drude model to describe graphene intra-band transitions [5][6][7][8][9][10] . Graphene's complex conductivity is determined by the terahertz pulse transmitted through the graphene film relative to the support, and analysed using Fresnel coefficients where graphene is modelled as an infinitely thin conducting film. Drift and field-effect mobilities can then be extracted by fitting the conductivity spectra to the Drude model 9 , and measuring graphene conductivity changes as a function of the applied back-gate voltages on a gate-stack support 8 , respectively. By repeating the measurement and analysis across the entire graphene area, a conductivity or mobility map can be reliably produced. While these demonstrations could potentially enable a rapid in-line graphene monitoring and large-area characterisation, to date THz-TDS has been carried out in transmission mode that necessitates a terahertz transparent support. Here we overcome this restriction and demonstrate, as a proof-of-concept, the quantitative contactless measurement of the electrical conductivity of CVD graphene on a range of application relevant supports using THz-TDS operating in reflection geometry. Reflection based THz-TDS has previously been used to characterise optically dense materials where due to the high energy loss within the sample, transmission based measurement cannot be used [11][12][13][14][15] . 
As schematically outlined in Fig. 1, we first validate our measurements in reflection geometry by performing the more well-established terahertz transmission conductivity mapping 5, 7, 10 on the same graphene sample. For this we use CVD grown graphene synthesised on commercial Cu foils and transferred to a sapphire substrate, which has the required transparency in the THz frequencies. We then show that by sample back-gating, our proposed method can be used to resolve graphene conductivity changes and hence to directly determine the graphene mobility on a p-doped Si substrate with a 300 nm thick SiO 2 layer, which is one of the most commonly used substrates for graphene device manufacturing. To illustrate that this technique has potential as a tool for in-line graphene quality monitoring, we show that the graphene conductivity can also be directly mapped on a substrate like Ge, which has a considerable number of intrinsic carriers at room temperature (42 Ω·cm) and exhibits a Drude like carrier absorption 16 . In particular we demonstrate THz-TDS mapping for graphene that has been directly grown on Ge. Methods Graphene growth and transfer. As reference process and material, we use well-established graphene CVD on commercial Cu foil and subsequent PMMA transfer, which is widely used in literature 17 . We use standard Cu foils (25 μm thick, Alfa Aesar purity 99.8%) and CH 4 as the carbon precursor 18 . For transfer, PMMA (poly methyl-methacrylate) was used as support, followed by FeCl 3 chemical etching to remove the Cu. As target substrates we used sapphire (430 μm thick), Si/SiO 2 wafer (300 nm/525 ± 25 μm thick), and intrinsic single crystalline Ge (110) wafer (50 Ω·cm, from MTI Corporation). Raman spectroscopy was performed using a 532 nm laser for characterising the transferred graphene. The Si wafer support was boron-doped (100 Ω·cm) to allow back-gating. The process conditions for graphene CVD directly on Ge were similar to recent reports in literature 19 and we used Ge (110) wafer substrates and a 1:52 precursor gas mixture of CH 4 /H 2 at a growth temperature of 920 °C in a Aixtron Black Magic cold wall CVD reactor. Terahertz time-domain reflection spectroscopy. Reflection based THz-TDS experiments were conducted with a Terahertz Pulsed Imaging (TPI) Imaga 2000 system (TeraView, Cambridge, UK), as schematically shown in Fig. 1. The terahertz radiation used here is broadband, covering a spectral range of 0.15-3 THz in free-space. Terahertz radiation is generated by pumping a biased photoconductive antenna with an ultrashort laser pulse from a Ti:Sapphire laser. The emitted terahertz pulse is collected, collimated, and then focused onto the sample with a focal length of 7 mm at an incident angle of 30°. The reflected terahertz pulse is then collected and focused onto an unbiased photoconductive antenna for the laser-gated terahertz detection. The TPI achieves a spatial resolution of approximately 400 μm at 1 THz 20 that in turn allows us to estimate the spot size of approximately 420 μm with Gaussian beam optics 21 . One of the main barriers for accurately extracting optical parameters in reflection geometry is the great sensitivity to any phase misalignment between the sample and reference measurements. 
In general, the phase misalignment can be mitigated by performing additional measurements with a slab of material of known optical constants 14 , by maintaining the same path length change for both the terahertz and the optical beam 15 , or by numerical phase correction using the maximum entropy method 22 . Attention was given to precise sample positioning by having both the Al mirror and the sample mounted on a motorised stage and positioning the respective front surfaces so that the reflecting plane of interest reflects the incident wave into the detector at maximum level 12 . It should also be noted that the measured substrate refractive index is not adversely affected by phase misalignment, as is the case for the extinction coefficient 13 . As an experimental check, the measured substrate refractive index for s-polarisation 23 was always compared against literature values. The reflection coefficient depends on the polarisation of the incident terahertz wave and on the angle of incidence. Given the substrate refractive index and using Tinkham's formulae to describe the effect of a thin conducting film 24 , the equation for obtaining the conductivity from s-polarisation reflection measurements can be derived as

$$\tilde{\sigma}(\omega) = \frac{\tilde{n}_1 \cos\theta_i \,[1 - \tilde{r}(\omega)] - \tilde{n}_2 \cos\theta_t \,[1 + \tilde{r}(\omega)]}{z_0 \,[1 + \tilde{r}(\omega)]}, \qquad (1)$$

where $\tilde{r}$ is the Fourier transformed ratio of the reflected wave's complex electric field from the sample to the incident wave from the mirror, z 0 is the vacuum impedance (376.7 Ω), θ i and θ t are the incident and transmitted angles, respectively, and $\tilde{n}_1$ and $\tilde{n}_2$ are the complex refractive indices of air and substrate, respectively 25,26 . At normal incidence and assuming zero conductivity, we recover the well-known Fresnel coefficient for reflection at the interface,

$$\tilde{r} = \frac{\tilde{n}_1 - \tilde{n}_2}{\tilde{n}_1 + \tilde{n}_2}. \qquad (2)$$

The analysis becomes simpler at normal incidence, but such a reflection system would in turn require a beamsplitter, which is not commonly implemented due to reflection losses 27 . At normal incidence, the complex conductivity can be expressed as

$$\tilde{\sigma}(\omega) = \frac{\tilde{n}_1 [1 - \tilde{r}(\omega)] - \tilde{n}_2 [1 + \tilde{r}(\omega)]}{z_0 \,[1 + \tilde{r}(\omega)]}. \qquad (3)$$

We performed analytical simulations to double-check the derived expressions and optical constants. The simulations involved (1) simulating a terahertz pulse generated from a photoconductive antenna switch with realistic parameter settings, namely 120 fs, 300 fs and 180 fs for the laser pulse duration, the emitter/detector carrier recombination time and the collision time, respectively 28 ; (2) determining the reflections from graphene and the substrate using the Fresnel coefficients, the real part of the material refractive indices from the literature 16 and a constant graphene conductivity value representative of CVD graphene grown on Cu foil 7, 10, 29 ; and (3) applying the derived expressions to obtain the conductivity for comparison against the conductivity defined in step 2. The use of only the real part of the material refractive index is justified because substrate materials such as sapphire and intrinsic Ge have a negligible extinction coefficient in the relevant frequency range between 0.5 and 1 THz 16 . The simulations highlighted that, when a phase shift is deliberately introduced between the sample reflection and the reference reflection, the slope of the derived conductivity spectra is no longer zero (see Supporting Information). This fact is exploited for phase correction in the automated analysis of our measurements. It should be noted that phase misalignment can arise for several reasons, such as fibre drift and mechanical jittering of the optical delay stage, and that these problems are most severe in fibre-coupled terahertz systems such as the one used in this study.
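To illustrate how Equation (1) is applied in practice, the following minimal sketch inverts the s-polarised reflection ratio for the sheet conductivity and checks the inversion against a forward model; the frequency-independent test values, the function name and the assumption that the mirror-referenced ratio has already been sign-corrected are ours, not part of the TPI analysis software.

```python
import numpy as np

Z0 = 376.7  # vacuum impedance (ohm)

def sheet_conductivity_s_pol(r_ratio, n1, n2, theta_i_rad):
    """Sheet conductivity (S) of a thin film at an air/substrate interface from the
    s-polarised complex reflection ratio, following the Tinkham-type expression of
    Equation (1). r_ratio is assumed to be sign-corrected so that a bare substrate
    would return the ordinary Fresnel coefficient of Equation (2)."""
    theta_t = np.arcsin(np.sin(theta_i_rad) * n1 / n2)          # Snell's law
    a = n1 * np.cos(theta_i_rad)
    b = n2 * np.cos(theta_t)
    return (a * (1.0 - r_ratio) - b * (1.0 + r_ratio)) / (Z0 * (1.0 + r_ratio))

# Illustrative round trip: a 2 mS sheet on a sapphire-like substrate (n2 ~ 3.1) at 30 deg.
n1, n2, theta_i = 1.0, 3.1, np.deg2rad(30.0)
sigma_true = 2e-3                                               # sheet conductance (S), assumed
theta_t = np.arcsin(np.sin(theta_i) / n2)
a, b = n1 * np.cos(theta_i), n2 * np.cos(theta_t)
r = (a - b - Z0 * sigma_true) / (a + b + Z0 * sigma_true)       # forward model
print(sheet_conductivity_s_pol(r, n1, n2, theta_i))             # recovers ~2e-3 S
```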
Our phase correction method therefore shifts the acquired reflection pulse with respect to the reference measurement in the time-domain in order for the real part of the calculated conductivity spectra to have a slope close to zero. A single step shift in time here corresponds to a sample being placed approximately 2.5 μm with respect to the reference mirror, and a positive value means that the sample position is shifted in the direction of the incident beam. We note that the phase compensation scheme can be alternatively implemented in the frequency domain by multiplying the phase shift term 6 . Transferred graphene on sapphire was measured with TPI at a step size of 200 μm, where 15 waveform traces were averaged to represent a measurement for one single pixel. Here the number of waveforms acquired for averaging is relatively small as would be the requirement for potential in-line applications. Before the measurement, however, terahertz reflection from the Al mirror placed nominally at the same position as the sample was used as reference measurement. An Al mirror generally works as an almost perfect reflector in the terahertz regime 30 . From the raster scanned measurement, the region of the substrate covered by graphene was isolated by masking the data with an intensity threshold value given that the graphene covered area corresponds to regions of higher reflectivity relative to the plain sapphire substrate. As the primary reflection was well separated from the first reflection in the time-domain, a time windowing function was used to process the acquired waveforms in the regions of interest in order to remove Fabry-Perot or etalon effects that would otherwise corrupt the conductivity measurement. A conductivity map was subsequently generated. In order to validate our proposed method, the same sample was scanned with THz-TDS operating in transmission mode. In particular, measurements were acquired with the Tera K15 T-Light (Menlo Systems GmbH, Germany) where the 60 mW pump pulse was focused to a 40 μm spot onto the terahertz photoconductive antenna, generating terahertz radiation with a beam diameter of approximately 1 mm at 1 THz. The sample was placed between the terahertz emitter and the detector at normal incidence without nitrogen purge, and data were acquired at an integration time of 10 ms at 200 μm step intervals. The integration time constant corresponded to an average of 700 waveform traces. The region of the substrate covered by graphene was again obtained by intensity masking, where graphene covered areas correspond to regions with a reduction in transmitted intensity due to the higher carrier absorption in graphene 10 . By performing the analysis detailed in recent literature 7, 10 , a comparative conductivity map for approximately the same sample area was generated. Data availability. Additional data sets related to this publication are available from the Cambridge University data repository at https://doi.org/10.17863/CAM.12742. Results and Discussion Graphene conductivity on sapphire substrate. Figure 2a-c shows the Raman graphene D/G ratio map, frequency distribution, and 2D/G ratio map of the transferred graphene on sapphire. Most measured points show a D/G ratio of roughly 10%, highlighting the presence of defects that have been introduced during the transfer procedure. The 2D/G ratio map shows an average value of more than 1 highlighting that the film is predominately monolayer graphene. 
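Returning to the slope-based phase correction described at the start of this section, its logic can be sketched as a simple search over integer time-shifts of the sample pulse; the search range, array layout and function names below are illustrative assumptions rather than the actual instrument software, and `sigma_from_ratio` stands for the Equation (1) inversion.

```python
import numpy as np

def flatten_phase(sample_pulse, ref_pulse, dt_s, band_hz, sigma_from_ratio, max_steps=10):
    """Shift the sample pulse by integer time steps (one step ~ 2.5 um of apparent
    sample displacement for this instrument) and keep the shift for which the real
    part of the extracted conductivity has the smallest slope over band_hz."""
    freqs = np.fft.rfftfreq(len(ref_pulse), dt_s)
    sel = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    ref_spec = np.fft.rfft(ref_pulse)
    best_shift, best_sigma, best_slope = 0, None, np.inf
    for shift in range(-max_steps, max_steps + 1):
        ratio = np.fft.rfft(np.roll(sample_pulse, shift)) / ref_spec
        sigma = sigma_from_ratio(ratio)
        slope = abs(np.polyfit(freqs[sel], sigma.real[sel], 1)[0])
        if slope < best_slope:
            best_shift, best_sigma, best_slope = shift, sigma, slope
    return best_shift, best_sigma
```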
The substrate refractive index measured in transmission mode TDS is 3.1, in close agreement with the literature 16 . This value was then used to obtain flat conductivity spectra 7, 10, 29 (see Supporting Information) without any phase correction, which in turn were used to generate the transmission conductivity map shown in Fig. 2d. The imaginary conductivity measured here is no longer negligible because the phase was not accounted for. Graphene film conductivity measurements with THz-TDS in transmission geometry have previously been benchmarked against micro four-point probe measurements, micro-Raman spectroscopy and optical imaging 7 . Here we use transmission measurements to validate our measurements in reflection geometry. The substrate refractive index measured in reflection mode was approximately 3. The slight discrepancy may be due to sample surface imperfections, leading to scattering losses, and to measurement with a focused rather than a collimated beam 31 . In a manner similar to the literature 7, 10, 29 and to our transmission measurements, the graphene conductivity spectra on sapphire for a well-aligned reflection measurement have a real part characterised by a flat spectral response close to the DC value well below the Drude roll-off frequency, or inverse scattering time, while the imaginary part is close to zero (see Supporting Information). The spectrally resolved conductivity can therefore be represented by a single real number. It has been shown that phase misalignment only significantly affects the imaginary part of the conductivity 6 , and emphasis is therefore placed on the real part of the conductivity only. As the conductivity spectra are approximately constant over the spectral range, the representative conductivity is taken as the average between 0.6 and 0.9 THz because at higher frequencies the transmission conductivity spectra (see Supporting Information) become affected by water vapour absorption under ambient conditions. This effect becomes visibly more pronounced with increasing optical path length. Similarly, for the conductivity measurement in reflection mode, the representative pixel conductivity is taken as the average over the same spectral range of 0.6 to 0.9 THz. [Figure 2 caption, in part: (f) shows a spatially filtered map of (e) with a spot size 2.4 times greater; conductivity histograms for transmission and reflection geometries are compared in (g) before and (h) after filtering; the histogram colour is darkened where the reflection and transmission measurements overlap.] Raman and terahertz mapping both resolve a similar shape of the transferred graphene film. Figure 2 compares the conductivity maps and histograms obtained with terahertz transmission and reflection mode TDS, respectively. The conductivity map acquired in reflection mode contains intermittent interlacing artefacts between alternate rows of the image. This is due to small signal fluctuations that propagate into the conductivity calculation. Sources of signal fluctuation include fibre drift, laser instability, and optical and electronic noise. Nevertheless, a qualitative agreement between the two measurement geometries can be seen, for instance by looking at the regions of low local conductivity. When comparing the histograms in Fig. 2g, the conductivity frequency distributions are generally in agreement despite the differences in the spot sizes of the THz-TDS systems used.
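A minimal sketch of how the per-pixel maps discussed above can be assembled, assuming the conductivity spectra are stored as a (rows, columns, frequencies) array; the array layout, threshold handling and function names are illustrative choices, not the actual analysis code.

```python
import numpy as np

def representative_conductivity(sigma_cube, freqs_thz, f_lo=0.6, f_hi=0.9):
    """Collapse per-pixel conductivity spectra (ny, nx, nf) to a single real-valued
    map by averaging Re(sigma) over the flat part of the spectrum (0.6-0.9 THz),
    mirroring the band averaging described above."""
    sel = (freqs_thz >= f_lo) & (freqs_thz <= f_hi)
    return sigma_cube.real[..., sel].mean(axis=-1)

def graphene_mask(reflected_intensity, threshold):
    """Isolate graphene-covered pixels by intensity thresholding: in reflection the
    film appears as regions of higher reflectivity than the bare substrate."""
    return reflected_intensity > threshold
```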
To allow a comparison that accounts for the difference in terahertz spot size, spatial filtering was applied to the reflection conductivity map to emulate a spot size approximately 2.4 times greater, generating the spatially averaged conductivity map and histogram shown in Fig. 2f and h, respectively. This results in a much closer agreement with the transmission measurements. The low conductivity values in the histogram correspond to pixels positioned close to the graphene boundary, and the slightly higher conductivity in reflection compared to transmission may be due to the averaging of the aforementioned signal fluctuations. The demonstration of the reflection measurement here means that, in principle, conductivity can be reasonably estimated directly from the first transmitted pulse, as recently demonstrated 6 , as opposed to from the first echo in a transmission measurement, even though the measurement robustness is lower for direct transmission analysis 7, 10 . Overall, we were able to validate our proposed approach with good agreement against established transmission measurements. Graphene mobility. In order to demonstrate that our proposed method can be used to resolve graphene conductivity changes, we implemented simultaneous back-gating 19, 32 via a graphene film transferred to a p-doped Si substrate (Fig. 1, see Methods). Unlike previous work where complex transistor and Hall bar devices were realised to measure the mobility 33, 34 , here we attached a piece of Cu foil to the Si side of the graphene/wafer stack and another Cu foil to the edge of the graphene. This deliberately simple setup provided the electrodes for back-gating and for grounding the graphene film, respectively. The Fermi level of graphene was electrically tuned via the back-gate voltage supplied by a DC variable voltage supply (Keithley, Model 2400), with DC voltages between −160 and 160 V applied. The back-gate leakage current was negligible. At a fixed point on the graphene, and at every 20 V back-gate voltage increment, reflection measurements were acquired on the TPI at 15 waveforms per average. The acquired waveforms were then analysed, and the sequence of steps was repeated for two other randomly selected locations on the graphene. The measurements were performed in ambient conditions at room temperature. Figure 3a shows a selection of the reflected waveforms, with expanded views of the signal near the peaks as a function of the applied back-gate voltage. From Equation 1, it is expected that terahertz reflection increases with increasing graphene conductivity induced by electrical gating. The reflection changes were not due to the influence of the substrate, as no changes to the waveforms were observed when the terahertz spot was on the substrate under similar gating voltages. At V g between 40 and 60 V, reflection is weakest, indicating that the Fermi energy at this gate voltage is closest to the Dirac point. At voltages below 40 V, terahertz reflection increases monotonically as V g moves away from this point, as shown in Fig. 3b. Based on the acquired terahertz reflection waveforms, the real part of the frequency-dependent conductivity was determined, as shown in Fig. 3b. This shows a strong dependence on the applied gate voltage. It can also be observed that there is a slight increase in the conductivity with increasing frequency. This may be due to a small degree of preferential back-scattering of charge carriers in the graphene 8 .
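A minimal sketch of such spatial filtering is given below, assuming a Gaussian blur; the native reflection spot size used here is a placeholder value rather than a number quoted in the paper.

import numpy as np
from scipy.ndimage import gaussian_filter

def emulate_larger_spot(cond_map, step_um=200.0, native_spot_fwhm_um=250.0, factor=2.4):
    # Blur a conductivity map (sampled every step_um) so its effective spot is factor times larger;
    # the additional blur is added in quadrature to the assumed native spot.
    extra_fwhm_um = native_spot_fwhm_um * np.sqrt(factor**2 - 1.0)
    sigma_px = extra_fwhm_um / (2.355 * step_um)  # convert FWHM to Gaussian sigma, in pixels
    return gaussian_filter(cond_map, sigma=sigma_px)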
For the reasons given above, the spectrally resolved conductivity is represented by a single real conductivity value, taken as the average conductivity between 0.4 and 0.9 THz. Figure 3c shows the measured conductivity for 3 randomly selected points on the graphene as a function of the applied back-gate voltage. The conductivity shows a linear dependence on the applied gate voltage in the range −160 ≤ V g ≤ 60 V, reflecting the field-effect mobility of the hole carriers. For V g > 60 V, a slower conductivity change is observed, indicating that electron mobility is impaired in this graphene film. Linear curve fitting was used to extract the slope of this line and thereby determine the hole mobility. An R 2 value of 0.986 was achieved in the fit, which corresponded to a conductivity change of 0.0083 mS per volt of applied gate bias. This slope was then converted to a mobility of 721 cm 2 V −1 s −1 using μ = (dσ/dV g )/C ox , where C ox = ε 0 ε ox /d ox is the capacitance per unit area of the 300 nm thick SiO 2 gate oxide. The extracted back-gated mobility value is comparable to the mobility values extracted from graphene transistor measurements of graphene films prepared in a similar manner 33,34 . Air-exposed graphene on SiO 2 is typically found to be p-doped 34 , consistent with the charge neutrality point V CNP at a positive bias 34 . Even though not demonstrated here, mobility mapping with terahertz reflection measurements across the sample could in principle be performed by raster scanning the sample at different back-gate voltages. Graphene conductivity mapping on Ge. Utilising a reflection rather than a transmission geometry allows contactless mapping of the graphene conductivity on a wider range of substrates. In terms of integrated graphene CVD manufacturing and process optimisation, a clear need is to probe graphene at each process step. So far we have discussed two examples of graphene conductivity measurements after graphene transfer onto substrates such as sapphire and Si/SiO 2 , but other substrates such as flexible polymers are in principle possible too, provided sufficient contrast can be observed; this is generally the case, as the real part of their refractive index is less than 2 and the extinction coefficient is negligible below 2 THz 35, 36 . This has been highlighted already for transmission measurements 7 . The challenge of adequately characterising graphene while still on the growth catalyst/support, however, remains. For instance, even individual Raman measurements of graphene on transition metals can be challenging, let alone mapping of large graphene areas. As a first step towards direct contactless graphene conductivity mapping on technologically relevant growth substrates, we here focus on Ge substrates. As a starting point we first transferred graphene onto intrinsic Ge(110), and then applied our method to graphene directly grown on Ge(110). The Ge(110) orientation is chosen because it yields higher quality graphene under the CVD process used here 19,37 . For our initial analysis, the imaginary part of the complex refractive index of Ge can still be assumed to be negligible. For the undoped Ge(110) reference substrate, the graphene conductivity can be expected to be close to the values measured on sapphire, except for minor deviations due to, for instance, charge transfer between graphene and Ge 38 . Hence we can also assume that the real conductivity spectra remain constant over the THz spectral range 7,10,29 .
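The conversion from fitted slope to mobility can be reproduced with the short sketch below; the SiO 2 relative permittivity of 3.9 is a textbook value assumed here rather than one quoted in the text.

import numpy as np

eps0, eps_ox, d_ox = 8.854e-12, 3.9, 300e-9      # vacuum permittivity (F/m), assumed SiO2 rel. permittivity, oxide thickness (m)
c_ox = eps0 * eps_ox / d_ox                      # gate capacitance per unit area (F/m^2)

slope = 0.0083e-3                                # fitted d(sigma)/dVg in S/V (0.0083 mS per volt)
mobility = slope / c_ox * 1e4                    # m^2 V^-1 s^-1 converted to cm^2 V^-1 s^-1
print(f"hole mobility ~ {mobility:.0f} cm^2 V^-1 s^-1")   # ~721, matching the value quoted above

# The slope itself would come from a linear fit over the hole branch, e.g.
# slope, intercept = np.polyfit(vg[vg < 60], sigma_sheet[vg < 60], 1)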
For reflection measurements on highly doped Ge, the assumption of a negligible extinction coefficient no longer holds 39 and therefore the frequency-dependent extinction coefficient needs to be accounted for in the conductivity analysis. Accurate measurement of the extinction coefficient in turn would require very high precision alignment between the sample and reference 13 . Analogous to our measurements of graphene on sapphire, the graphene film transferred onto Ge was measured with TPI at a step size of 200 μm, where 15 waveform traces were averaged. A conductivity map was then generated from the raster-scanned measurement by applying the aforementioned analysis; the region of graphene coverage was obtained by intensity masking the data, with the graphene-covered area corresponding to regions of higher reflection relative to bare Ge. The measured substrate refractive index of Ge was approximately 4, in close agreement with the literature 16 . It should be noted that the contrast, i.e. the reflection change from the bare Ge substrate to the graphene-covered area, is approximately 5% for graphene with a conductivity of 1.2 mS. This can be compared against graphene on sapphire, where for the same conductivity the contrast is approximately 10%, allowing a clear discrimination between graphene-covered areas and the bare substrate. With a lower contrast, the measurement becomes more susceptible to noise, and we therefore defined the threshold for intensity masking by considering an average of the substrate reflection. Figure 4a shows the measured conductivity map of transferred graphene on Ge(110) (see Supporting Information for the corresponding optical microscope image). Here the blacked-out pixels in the region of interest correspond to pixels where the conductivity was not computed because the reflection was below the defined threshold. For all other pixels, the average conductivity over 0.6 to 0.9 THz was estimated (see Supporting Information). On the conductivity map there are visible spots of low conductivity, due either to small signal fluctuations resulting from the lower contrast or to the graphene film itself, though the former is more likely. The conductivity histogram in Fig. 4b shows that the distribution does not closely follow a Gaussian function, but is centred around 1 mS, in agreement with the conductivity value measured on the sapphire support (Fig. 2). (Figure 3 caption: the measured terahertz reflection from a randomly selected position on the graphene on boron-doped Si/SiO 2 substrate as a function of applied back-gate voltage in (a), and an expanded view of the peak of the terahertz reflection in (b); the corresponding real conductivity spectra in (c), and the average real, gate-induced conductivity from 0.4 to 0.9 THz as a function of V g for 3 distinct positions on the graphene area; circles are experimental data and the lines are linear fits to the data for −160 V < V g < 60 V and 60 V < V g < 160 V for field-effect hole and electron mobilities, respectively.) The lower average conductivity may be due to the aforementioned reduced contrast and hence increased sensitivity to the noise inherent to commercial fibre-coupled THz-TDS systems, as well as to a smaller graphene-covered area leading to the increased influence of boundary areas, where lower graphene conductivity is generally observed. There is also qualitative agreement in the shape of the histogram compared to Fig. 2g, where a sharp roll-off is observed for high conductivities as opposed to the lower conductivity values.
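The substrate-referenced intensity masking could be sketched as follows; the arrays and the 2% margin are illustrative assumptions, not values from the paper.

import numpy as np

def graphene_mask(peak_map, substrate_pixels, margin=0.02):
    # Flag pixels whose peak reflection exceeds the mean bare-substrate reflection by a
    # relative margin; graphene-covered areas reflect more strongly than the bare substrate.
    substrate_level = peak_map[substrate_pixels].mean()
    return peak_map > substrate_level * (1.0 + margin)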
Overall, the Ge measurements show that graphene film conductivity mapping by terahertz reflection spectroscopy on lightly doped substrates is possible, with the obtained conductivity values in agreement with our sapphire measurements. Note that the conductivity value is expected to vary slightly when comparing graphene on sapphire and on Ge due to substrate-induced doping 38 . As a final validation, we apply our method to graphene directly grown on Ge(110). After CVD of a uniform graphene coverage on Ge (see Methods), we used drop-cast PMMA photoresist and oxygen plasma etching (oxygen partial pressure 50 mbar, 50 W, 8 s) to define graphene patterns and thereby provide reference contrast against the plain underlying Ge. After oxygen plasma etching, the remaining photoresist was removed with acetone. We then measured with TPI at a step size of 75 μm, where 50 waveform traces were averaged for one pixel. Figure 4c shows the peak electric field map, where a contrast of approximately 5% can be observed between the graphene and the exposed underlying Ge. After graphene CVD, the Ge substrate appears to be highly doped (0.25 Ω·cm), as verified by standard conductivity measurements. This means that although a qualitative analysis of the intensity map is possible, a quantitative analysis to obtain the conductivity value cannot be reliably performed on this sample, because the underlying assumption of a negligible extinction coefficient is no longer valid. Accurate determination of the extinction coefficient would require high precision sample alignment for substrate Drude-model fitting, which is outside the scope of this paper. The proposed technique, however, would be possible for lightly doped substrates (1-10 Ω·cm), where the extinction coefficient remains negligible for frequencies greater than 0.5 THz, as in the case of Si 40 . In principle, these substrates could also work with transmission THz-TDS, though the measurement may slightly overestimate the conductivity due to increased carrier absorption as the terahertz pulse makes a return path inside the substrate. The fact that there is an observable contrast in the reflection measurement is promising for future conductivity mapping of graphene synthesised directly on Ge. It can be expected that in samples where the contrast is lower, either because of a higher substrate refractive index at terahertz frequencies, a lower graphene conductivity, or a combination thereof, the conductivity measurement in reflection geometry becomes increasingly susceptible to the effect of signal fluctuations and hence less reliable. For instance, on growth substrates such as Cu 2 , no contrast could be observed relative to Cu for a sample of similar graphene conductivity, because the high Cu conductivity (59 MS/m) leads to a refractive index (~730) at least two orders of magnitude larger than that observed for sapphire or Ge at 1 THz. At the same time, it should be pointed out that changes to the graphene manufacturing process, such as intercalation of a micron-thick oxide layer 41 , could potentially be used to increase the measurement contrast for future in-line characterisation of graphene on, e.g., Cu. Conclusion In summary, we have demonstrated the feasibility and potential of measuring the electrical conductivity of CVD graphene with THz-TDS in reflection geometry.
Using a terahertz-transparent sapphire support, we have validated the technique against current state-of-the-art THz-TDS transmission measurements: after taking into account the differences in terahertz spot sizes, we find a close agreement between the conductivity histograms. Using a back-gated Si/SiO 2 support, we have further demonstrated the sensitivity of the technique in resolving conductivity changes during electrostatic gating, and hence the ability to directly determine graphene mobilities, with our values consistent with standard electrical measurements. To illustrate the potential of this technique as a tool for in-line graphene quality monitoring, we showed that the graphene conductivity can also be directly mapped on a substrate such as Ge, where only half the contrast is seen relative to the sapphire substrate due to an approximately 30% increase in refractive index in the relevant frequency range. For the graphene films synthesised directly on Ge(110), the Ge substrate became highly doped after graphene synthesis, and hence high precision alignment is required to accurately determine the graphene conductivity. It should be noted that the proposed technique also assumes a flat conductivity spectrum, which is not the case for all graphene samples, such as graphene grown on a single Cu crystal. For these samples, provided the shape of the conductivity spectrum is known, a similar slope-fitting method could be used. A more robust approach, however, would involve high precision alignment, which is the subject of ongoing research. Our data show that measurement and analysis are understandably more complex, and can be less robust, in reflection compared to transmission, especially considering that there is less terahertz interaction with the sample. Nevertheless, terahertz time-domain reflection spectroscopy is a highly interesting contactless, quantitative characterisation technique with clear potential to complement existing characterisation techniques for graphene and other related 2D materials, and to open new opportunities for the rapid screening of large-area 2D crystals and films, which is crucial for the emerging applications and industrial development of these materials.
7,175
2017-09-06T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
Long-Term Pterostilbene Supplementation of a High-Fat Diet Increases Adiponectin Expression in the Subcutaneous White Adipose Tissue Pterostilbene (Pt) is a natural phenol found in blueberries and grapes; it shows remarkable biomedical activities similar to those of resveratrol, but its higher bioavailability is a major advantage for possible biomedical applications. Our group has recently demonstrated that long-term (30 weeks) administration of Pt to mice maintained on a high-fat diet counters weight gain and promotes browning of subcutaneous white adipose tissue (sWAT). By Real-time quantitative PCR and Western Blot analysis of the sWAT and visceral white adipose tissue (vWAT) from the same mice used in the previous study, we show here that Pt induced a long-term increase of Adiponectin, Interleukin 10 and of M2 macrophage marker Cd206. The effects were observed in sWAT, while no significant changes were detected in vWAT. The process taking place seems to mimic that occurring in sWAT during cold-induced browning. Analysis of a few pro-inflammatory cytokines (Interleukin 6, Tumor necrosis factor α) and of the NFkB pathway did not reveal marked effects of Pt supplementation. In summary, the mechanisms and processes through which Pt acts in adipose tissue appear to closely mimic those set in motion by cold-induced browning, and point to a possible impact of experimental conditions in the final output of a nutraceutical intervention. Introduction Pterostilbene (Pt) is a plant phenol, differing from its most popular analog, resveratrol, because two of the three hydroxyl groups of the molecule are methylated. This imparts higher lipophilicity and bioavailability, and thus efficacy. In fact, Pt shows a variety of health promoting properties [1][2][3]; its prolonged consumption has been reported to exert beneficial effects also against obesity [4][5][6][7][8]. We previously demonstrated that long-term (30 weeks) administration of Pt to mice maintained on a high-fat diet countered weight gain and promoted browning of subcutaneous adipose tissue (sWAT) [5]. Brown Adipose Tissue (BAT) is an inducible mammalian organ providing non-shivering thermogenesis, i.e., turning food-derived energy into heat [9,10]. The process of "browning" is characterized by the formation of thermogenic "beige" adipocytes in subcutaneous white adipose tissue, and is considered a hopeful approach to the problem of obesity in modern societies [11,12]. Beige adipocytes express a set of specific genes, but also brown adipocyte-associated genes such as Ucp1, and have features in common with brown cells, such as abundant mitochondria. Physiological factors inducing browning are, for example, exposure to cold [13], physical exercise [14,15], or thyroid hormones [16,17]. Cold exposure leads to the release of norepinephrine by the sympathetic nervous system, which leads to activation of β3-adrenergic signalling; downstream signalling then branches and interacts with other pathways in complex patterns [18][19][20]. A key feature of cold-induced browning in mice is Adiponectin elevation in subcutaneous fat deposits (but not in epididymal WAT or interscapular BAT) [21,22], which persists for at least weeks. Of note, circulating Adiponectin instead decreases, and intravenously-injected Adiponectin accumulates in sWAT [23]. Cytokines released by immune cells in the adipose tissue play a crucial role in adipose tissue homeostasis. 
They regulate the thermogenic activity of brown and beige adipocytes [24] (e.g., Interleukin 4 and Interleukin 13 [25,26]), but also contribute to the low-grade chronic inflammatory state commonly observed during obesity [27,28], which in turn is implicated in the development of obesity-related comorbidities such as insulin resistance, type 2 diabetes, or cardiovascular disease. Tumor necrosis factor (TNFα), for example, is a pro-inflammatory cytokine, but it has also been reported to increase upon exposure to low temperatures [29,30]. Interleukin 6 (IL6), which can act as a pro-inflammatory cytokine or an anti-inflammatory myokine, has been reported to increase about 6-fold upon exposure to cold (4 °C; 15 days) [29]. Non-shivering thermoregulation is impaired in IL6-deficient mice [31][32][33]. In this work, we investigated the impact of Pt on the expression of Adiponectin and a few anti- and pro-inflammatory markers in two different adipose tissue depots: the subcutaneous (inguinal) WAT (sWAT), which is the most prone to browning, and the visceral (epididymal/periovaric) one (vWAT), which has been reported to be the most affected by chronic inflammation during obesity. Importantly, the experimental design included both males and females [34][35][36]. The results we obtained highlight that Pt has an effect on the sWAT that resembles the one induced by cold exposure, and point out that the beneficial effects of Pt in our experimental model are mediated by mechanisms other than an anti-inflammatory effect on vWAT. Animals C57BL/6 mice were housed in the SPF (specific pathogen free) facility of the Department of Pharmaceutical and Pharmacological Sciences (Padova, Italy); food and water were provided ad libitum. Procedures were all approved by the University of Padova Ethical Committee for Animal Welfare (OPBA) and by the Italian Ministry of Health (Permit Number 211/2015-PR), and conducted under the supervision of the Central Veterinary Service of the University of Padova, in compliance with Italian Law DL 26/2014, embodying EU Directive 2010/63/EU. Animal Treatments The animal cohort was the same as that used in [5]; briefly, 1 month after birth (i.e., after weaning), mice were divided into three experimental groups (16 animals/group; 8 males and 8 females): the STD group was fed a standard diet; the HFD group was fed a high-fat diet (60% calories from fat, OpenSource Diets TM , #D12492, Research Diets Inc; New Brunswick, NJ, USA); and the HFD + Pt group was fed a high-fat diet supplemented with Pt (90 mg/Kg body weight/day). The high-fat diet was crushed, thoroughly mixed with the appropriate amount of Pt (from Waseta Int. Trading Co., Shanghai, China), and then compacted again into pellets. The Pt amount per g of diet was calculated as follows: 90 × mean body weight (Kg)/mean daily diet consumption (g); since body weight changed during the experiment, the Pt amount in the diet was adjusted accordingly over time. To avoid compound degradation, the Pt-supplemented diet was prepared twice a week, stored at 4 °C and changed in the animal cages every day. Sample size was determined through power analysis (power = 0.8; effect size = 1.5; α = 0.05). Mice were maintained under the different dietary regimens for 30 weeks; at the end of this period, they were sacrificed after being fasted for 4 h.
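As a small illustration of the dosing arithmetic described above, the following sketch computes how much Pt to mix per gram of diet; the body weight and food intake values are placeholders, not data from the study.

def pt_mg_per_g_diet(dose_mg_per_kg_bw, mean_body_weight_kg, mean_daily_diet_g):
    # Pt (mg) to add per gram of diet so that daily intake matches the target dose.
    return dose_mg_per_kg_bw * mean_body_weight_kg / mean_daily_diet_g

# Example with illustrative values: a 35 g mouse eating 2.8 g of diet per day
print(pt_mg_per_g_diet(90, 0.035, 2.8))   # ~1.1 mg of Pt per g of diet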
Subcutaneous (inguinal) and visceral (epididymal or periovaric) adipose tissues (sWAT and vWAT, respectively) were collected, immediately frozen in liquid nitrogen and then stored at −80 °C until extraction (see below, Section 2.3). RNA and Protein Extraction RNA and protein extractions were conducted as described in [5]. Briefly, the frozen tissue was ground in liquid nitrogen with a mortar and pestle, and the material was then divided evenly into two test tubes, one for RNA and one for protein extraction. Total RNA was extracted using the TRIzol reagent (Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's instructions, adding a centrifugation step (5 min at 12,000× g at 4 °C) after TRIzol lysis, as recommended by the protocol, to remove most of the tissue fat content. The RNA content of each sample was quantified with a NanoDrop (Thermo Fisher Scientific, Waltham, MA, USA). Protein extraction was performed by resuspending the frozen tissue powder (about 80 mg) in 0.3 mL RIPA lysis buffer containing protease and phosphatase inhibitors (Merck Life Science S.r.l., Milano, Italy). The samples were incubated for 30 min on ice and then physically disaggregated with an electric pestle (Merck Life Science S.r.l., Milano, Italy). The lysates were then centrifuged at 20,000× g for 15 min at 4 °C to remove fat, and finally transferred to a new tube. Quantification of the total protein content in each lysate was performed using the BCA assay (Thermo Fisher Scientific, Waltham, MA, USA). Quantitative Real Time PCR (RT-qPCR) The SuperScript VILO reverse transcriptase kit (Thermo Fisher Scientific, Waltham, MA, USA) was used to reverse-transcribe 400 ng of total RNA from each sample, following the manufacturer's protocol. Expression levels of target genes were then analyzed by RT-qPCR, using the primers listed in Table 1. All primers were designed with the Primer3 software (version 4.1.0, Whitehead Institute for Biomedical Research, Steve Rozen, Andreas Untergasser, Maido Remm, Triinu Koressaar and Helen Skaletsky, Cambridge, MA, USA). The reactions (8 ng cDNA/reaction) were performed using iQ SYBR Green Supermix (Bio-Rad, Hercules, CA, USA) on a CFX Connect Real-Time System (Bio-Rad, Hercules, CA, USA). All samples were run in triplicate, and Gapdh was used as the reference gene. Relative gene expression levels were calculated using the 2 −ΔΔCt method [37]. Table 1. List and sequences of the primers used for RT-qPCR analysis. Pt Quantification in the Adipose Tissue Pterostilbene and its metabolites were extracted from sWAT and vWAT as described in [38]. Samples were finally analyzed by high performance liquid chromatography with UV detection (HPLC/UV; 1290 Infinity LC System, Agilent Technologies, Santa Clara, CA, USA) using a reverse phase column (Zorbax Extend-C18, 1.8 µm, 50 × 3.0 mm i.d.; Agilent Technologies) and a UV diode array detector (190-500 nm). Solvents A and B were water containing 0.1% trifluoroacetic acid and acetonitrile, respectively. The gradient for B was as follows: 10% for 0.5 min, then from 10% to 100% in 3.5 min, 100% for 1 min; the flow rate was 0.6 mL/min and the column compartment was maintained at 35 °C. The eluate was preferentially monitored at 286, 300 and 320 nm. The concentration of Pt was determined as described in [38,39]. Western Blot Proteins (50 µg for each sample) were separated by SDS-PAGE (Pre-cast NuPAGE 4-12% Bis-Tris Gels, Life Technologies, Carlsbad, CA, USA).
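The relative-expression calculation mentioned above can be illustrated with the minimal sketch below; the Ct values are made up for illustration, with Gapdh as the reference gene as in the text.

def fold_change_ddct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    # 2^-ΔΔCt: ΔCt of the treated sample minus ΔCt of the control group.
    delta_sample = ct_target - ct_ref
    delta_control = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(delta_sample - delta_control)

# Example with illustrative Ct values (target vs Gapdh, treated sample vs control group)
print(fold_change_ddct(24.1, 18.0, 25.3, 18.1))   # ~2.1-fold higher than the control group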
After electrophoretic separation, proteins were transferred to PVDF membranes (Immobilon-FL, 0.45 µm, Merck Life Science S.r.l., Milano, Italy). For total protein staining, membranes were incubated for 2-3 min at room temperature with a Ponceau Red 0.5% (w/v) solution in 5% (v/v) acetic acid (Merck Life Science S.r.l., Milano, Italy), and then rinsed with milliQ water to wash out the background stain. Images of the Ponceau Red staining were acquired in bright field using a UVITEC Eppendorf apparatus. Membranes were then rinsed with milliQ water and destained with Tris-buffered saline supplemented with 0.1% Tween-20 (TBS-T; Merck Life Science S.r.l., Milano, Italy). The membranes were saturated with 5% bovine serum albumin (Merck Life Science S.r.l., Milano, Italy) in TBS-T buffer for 1 h and then incubated overnight at 4 °C with the primary antibody. Primary antibodies used were anti-β-actin (dil. Statistical Analyses Statistical analysis was performed using GraphPad Prism software (version 8, GraphPad Software, San Diego, CA, USA). The D'Agostino and Pearson omnibus normality test was performed to assess the normal distribution of the sample population. Comparisons between two groups were performed with Student's t-test when the sample population was normally distributed; otherwise, the non-parametric Mann-Whitney test was performed. Comparisons between three or more groups were performed with Brown-Forsythe and Welch ANOVA tests if data were normally distributed, otherwise with the non-parametric Kruskal-Wallis test. Significance in comparisons was labeled in the figures as follows: * p ≤ 0.05; ** p ≤ 0.01; *** p ≤ 0.001. Spearman's correlation was performed to evaluate the association between gene expression levels. Effects of Pt on Adiponectin Since Adiponectin is an adipokine with a crucial role in insulin signalling and browning induction [21,40], we evaluated the alterations in its expression by RT-qPCR in two different WAT depots, namely the sWAT and the vWAT. Compared to the STD group, chronic feeding with the high-fat diet (HFD) significantly decreased the expression of Adiponectin selectively in the sWAT (Figure 1a). Supplementation with Pt (HFD + Pt group) significantly increased the Adiponectin transcript to levels comparable with those observed in mice fed a standard diet. HFD also decreased the expression of the anti-inflammatory M2-macrophage marker Cd206 (an effect partially reversed by Pt supplementation) and of Il10, an anti-inflammatory cytokine produced by M2 macrophages (Figure 1a). In this latter case the Pt-induced "rebound" did not achieve significance, but a trend seemed to be present. Of note, a correlation between the levels of Adiponectin and Il10 transcripts in the sWAT from HFD + Pt mice seems to be present (p ≤ 0.05, Spearman's correlation coefficient (r) = 0.6; Figure 1d). None of these effects was observed in vWAT (Figure 1b). Data were also analyzed considering males and females separately (Supplementary Figure S1); since we did not observe sex-dependent differences in the analyzed markers, data from males and females were always averaged and shown together. To confirm the effect also at the protein level, we performed Western Blot (WB) analysis; since a comparative analysis can be performed only between samples run on the same blot/gel, and the main focus of the work was to uncover the effects of Pt supplementation, this analysis was performed only on sWAT samples from HFD and HFD + Pt mice.
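A minimal sketch of the test-selection logic and the Spearman correlation described above is given below; the data arrays are random placeholders rather than the study data, and scipy's normaltest implements the D'Agostino-Pearson test used here.

import numpy as np
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    # Student's t-test if both groups pass the D'Agostino-Pearson normality test, else Mann-Whitney.
    normal = stats.normaltest(a).pvalue > alpha and stats.normaltest(b).pvalue > alpha
    return stats.ttest_ind(a, b) if normal else stats.mannwhitneyu(a, b)

rng = np.random.default_rng(0)
adipoq, il10 = rng.normal(1.0, 0.3, 24), rng.normal(1.0, 0.3, 24)  # placeholder expression levels
rho, p_value = stats.spearmanr(adipoq, il10)                       # association between transcripts
print(compare_two_groups(adipoq, il10), rho, p_value)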
The results confirmed the Pt-driven increase of Adiponectin and IL10 also at the protein level (Figure 2; data considering males and females separately are shown in Supplementary Figure S2). Adiponectin is produced by the adipose tissue and then released into the bloodstream to reach target organs such as the liver, muscles, and brain. Thus, we also evaluated by WB analysis the levels of circulating Adiponectin. The results showed no differences between HFD and HFD + Pt mice (Figure 3). (Figure 3 caption, in part: Adiponectin levels were normalized to the total protein levels of each sample; mean values ± SEM are shown; data from males and females averaged together; n ≥ 8 for each condition (n ≥ 4 for males, n ≥ 4 for females); statistical analysis was performed using the Mann-Whitney test.) Effects on CREB CREB (cAMP response element-binding protein) is a transcription factor that is activated by phosphorylation and promotes the transcription of genes involved in various processes. In the context of obesity, CREB was found to be activated in adipose tissue, where it promotes insulin resistance by triggering expression of the transcriptional repressor ATF3 and thereby downregulating the expression of Adiponectin as well as of the insulin-sensitive glucose transporter 4 (GLUT4) [41]. Consistently, we found that Pt supplementation decreased CREB phosphorylation, and thus its activation, in comparison to the HFD group (Figure 4). (Figure 4 caption, in part: (b) representative images of WB bands (1 from a male and 1 from a female for each condition); protein levels were normalized to β-actin; mean values ± SEM; data from males and females averaged together; n = 4 for each condition (n = 2 for males, n = 2 for females).) Pt Quantification in WAT Depots To exclude a depot-specific effect due to a specific accumulation of Pt, we assessed the concentration of Pt and its main Phase II metabolites (i.e., Pt-sulfate and Pt-glucuronide) by HPLC/UV analysis of the vWAT and sWAT from each animal. In most cases, we found negligible levels of Pt metabolites (<0.1 nmol/g tissue); in only one sample (out of 20 analyzed) was Pt-sulfate detected, at a concentration of 0.5 nmol/g. Non-metabolized/intact Pt was the main species present in the adipose tissues analyzed. The comparison between sWAT and vWAT did not show any significant difference in Pt accumulation, thus suggesting that the increase observed for Adiponectin, Cd206 and Il10 in sWAT is not attributable to a higher concentration of the phenol in this depot (Figure 5). Effects on Adipose Tissue Inflammation Since obesity is generally associated with low-grade chronic inflammation [42], we analyzed the expression of a few pro-inflammatory cytokines (Tnfα, Il6) and of the anti-inflammatory Il10 in the vWAT, which is recognized to be the adipose tissue depot most affected by inflammatory processes. However, no significant effects of HFD or of Pt supplementation were observed, aside from a non-significant increase of Tnfα gene expression in HFD-fed mice compared to the STD group (Figure 6; data considering males and females separately are shown in Supplementary Figure S3). As described in Section 3.1, analysis of the same markers in sWAT revealed that the obesogenic diet caused a reduction of the anti-inflammatory cytokine Il10 (Figures 1a and 6a), while Pt supplementation increased IL10 levels (Figure 2). Interestingly, we found that Pt paradoxically caused a significant increase of Tnfα gene expression.
A similar increase has been observed in the serum of mice exposed to cold [29], which seems to indicate that the observed increase in Tnfα could be part of the browning process induced by Pt. Since pro-inflammatory cytokines are produced as a result of the activation of the NFkB pathway, we also investigated by Western Blot analysis the effects of Pt on the phosphorylation/activation of NFkB and IkBα. Also in this case, as explained in Section 3.1, WB analysis was performed only with WAT samples from the HFD and HFD + Pt groups. The results showed that chronic long-term supplementation with Pt did not exert any effect on the NFkB pathway, in either vWAT or sWAT (Figure 7; data considering males and females separately are shown in Supplementary Figures S4 and S5). (Figure 7 caption, in part: protein levels were normalized to total NFkB and IkBα; mean values ± SEM; data from males and females averaged together; n ≥ 6 for each condition (n ≥ 3 for males, n ≥ 3 for females).) Discussion Although Pt has already been demonstrated to exert multiple beneficial effects on obesity [6][7][8][43], this study presents some relevant novelties in the field. First, such a long-lasting treatment (30 weeks) has rarely been investigated so far, with few exceptions concerning, for instance, the effects of the HFD on blood-brain barrier function [44], sperm motility [45] and osteogenesis [46]. The long-term supplementation with Pt also supported the lack of toxicity of this natural compound (as already reported in [47]). Second, males and females were both included in the study: this aspect is particularly important, since females are often underrepresented in biomedical research with rodents [35,36,48]. The administered dose (90 mg/Kg body weight/day) was selected to ensure Pt levels in the adipose tissue in the 5-10 µM range [5], i.e., the concentration range at which Pt was demonstrated to exert beneficial effects on adipocytes [49]. This dosage roughly corresponds to a 450 mg daily dose for a 60 Kg person [50]; this is more typical of a dietary supplementation than of a dietary intake, since the Pt content in blueberries (one of its richest sources) is about 100-520 ng/g [51,52]. We previously published that Pt is able to increase the expression of various genes regulating the browning/beiging process in the sWAT, such as Ebf2, Sirt1, Pgc1α and Pparγ [5]. We demonstrate here that these effects were paralleled by a local increase of Adiponectin and IL10 (both transcript and protein), alongside an increase in the transcription of the M2 anti-inflammatory macrophage marker Cd206. The increase of these markers was selectively observed in the sWAT. The differential effect of Pt on sWAT and vWAT cannot be ascribed to differences in Pt accumulation, since similar Pt levels were detected in sWAT and vWAT (Figure 5). One of the possible causes underlying the observed depot-selective response could be the intrinsic heterogeneity of these white fat depots, each having a specific cellular composition and microenvironment and a unique interaction with stromovascular cells (i.e., macrophages, neutrophils, lymphocytes, fibroblasts, and endothelial cells) [53][54][55]. Our data are consistent with what was observed by Hui and co-authors [21]: cold exposure led to a marked increase of Adiponectin both at the transcript and protein levels; also in this case, the accumulation of Adiponectin was selectively observed in sWAT and not in vWAT.
The adipokine binds to T-cadherin on M2 macrophages and stimulates their proliferation, skewing the macrophage phenotype distribution towards the anti-inflammatory M2 phenotype. M2 macrophages in turn supply catecholamines for the activation of beige cells [56][57][58]. Since chronic adipose tissue inflammation is often associated with obesity and is considered to play a major role in the onset of obesity-associated disorders [42], we also analyzed the expression levels of a few pro- and anti-inflammatory markers (Il6, Il10, and Tnfα), both in the vWAT (which is supposed to be the most affected by inflammation) and in the sWAT. The results obtained from vWAT showed that HFD mice had only a slight increase of Tnfα compared to STD mice, and this was partially attenuated by Pt supplementation; however, none of these variations reached statistical significance. Evaluation of other markers of inflammation such as Il6 and Il10, as well as of NFkB pathway activation, revealed no effect of the diet or of Pt. Various aspects might contribute to these unexpected results: for example, most of the studies observing an HFD-induced pro-inflammatory effect applied a much shorter feeding protocol than ours. Furthermore, a few reports highlighted that the effects of the diet on low-grade inflammation may be directly correlated with the composition of the diet used [59,60]: feeding mice a very high-fat diet (vHFD) increased weight, fat mass, and plasma and liver triglycerides compared to a low-fat or a moderate-fat diet. However, vHFD-fed mice did not develop metabolic endotoxemia and low-grade inflammation, compared to the other groups. A possible explanation might be found by considering intestinal goblet cells and the composition of the gut microbiota: the former play an important role in the modulation of the mucus coat composition, while the latter is involved in the frontline host defense against endogenous and exogenous irritants. Indeed, the intestine of vHFD-fed mice was characterized by an increased number of goblet cells, while no changes were observed in gut microbiota composition [59,60]. Finally, the absence of an effect of Pt on the considered inflammation-related markers might find an explanation in the intervention of adaptive/homeostatic processes, which clearly deserve consideration. Analysis of the same markers in sWAT, on the other hand, revealed that Pt significantly increased Tnfα transcription when compared to the HFD group. Similar results were reported in a few studies of browning and cold exposure [29,33]: an increase of several cytokines (including Il6 and Tnfα) was reported, and this was hypothesized to play an important role in the cross-talk between WAT and other tissues. Collectively, our data suggest that Pt could be able to activate sWAT browning through a mechanism likely involving pCREB and Adiponectin. As far as we know, this is the first evidence demonstrating that Pt is able to promote an increase of Adiponectin in vivo. Conclusions Altogether, the results obtained in this study suggest that Pt might exert a slimming effect through the activation of molecular mechanisms similar to those activated during the physiological response to stimuli such as cold exposure, culminating in the activation of WAT browning. Indeed, a nutraceutical approach to the prevention of obesity that mimics physiological mechanisms could be advantageous because of the lower risk of side effects compared to pharmacological agents.
Surprisingly, we did not observe any significant effect of HFD or Pt on vWAT inflammation, suggesting that experimental variables used in preclinical studies, such as diet composition or duration of the treatment, are aspects that deserve attention. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nutraceuticals2020008/s1: Figure S1: Gene expression analysis of Adiponectin, Cd206 and Il10 in sWAT and vWAT from STD, HFD and HFD + Pt mice. Data from males and females considered separately. Figure S2: Adiponectin and IL10 protein levels in sWAT from HFD and HFD + Pt mice. Data from males and females considered separately. Figure S3: Gene expression analysis of Il6, Il10 and Tnfα in sWAT and vWAT from STD, HFD and HFD + Pt mice. Data from males and females considered separately. Figure S4: pNFkB and pIKBα protein levels in sWAT from HFD and HFD + Pt mice. Data from males and females considered separately. Figure S5: pNFkB and pIKBα protein levels in vWAT from HFD and HFD + Pt mice. Data from males and females considered separately. Data Availability Statement: The data that support the findings of this study are available from the corresponding authors upon request.
5,471.2
2022-05-30T00:00:00.000
[ "Biology" ]
Early action on HFCs mitigates future atmospheric change As countries take action to mitigate global warming, both by ratifying the UNFCCC Paris Agreement and enacting the Kigali Amendment to the Montreal Protocol to manage hydrofluorocarbons (HFCs), it is important to consider the relative importance of the pertinent greenhouse gases and the distinct structure of their atmospheric impacts, and how the timing of potential greenhouse gas regulations would affect future changes in atmospheric temperature and ozone. HFCs should be explicitly considered in upcoming climate and ozone assessments, since chemistry-climate model simulations demonstrate that HFCs could contribute substantially to anthropogenic climate change by the mid-21st century, particularly in the upper troposphere and lower stratosphere, i.e., global average warming of up to 0.19 K at 80 hPa. The HFC mitigation scenarios described in this study demonstrate the benefits of taking early action in avoiding future atmospheric change: more than 90% of the climate change impacts of HFCs can be avoided if emissions stop by 2030. Introduction This year, as countries take action to mitigate global warming, both by ratifying the UN Framework Convention on Climate Change (UNFCCC) Paris Agreement (http://unfccc.int/resource/docs/2015/cop21/eng/l09r01.pdf) and enacting the Kigali Amendment to the Montreal Protocol to manage hydrofluorocarbons (HFCs) (www.unep.org/newscentre/Default.aspx?DocumentID=27086&ArticleID=36283&l=en), it is important to consider the relative importance of the pertinent greenhouse gases (GHGs) and the distinct structure of their atmospheric impacts, and how the timing of potential GHG regulations would affect future changes in atmospheric temperature and ozone. Atmospheric concentrations of HFCs are increasing rapidly, as HFCs replace the ozone-depleting substances (ODSs, e.g., the chlorofluorocarbons (CFCs)) (United Nations Environment Programme (UNEP) 2012; see figure 1(a)). This growth is a response to increasing global demand for HFC applications such as air conditioning and refrigeration. While HFCs are projected to make only a minor contribution to future ozone depletion (Hurwitz et al 2015), many HFCs (like the CFCs and hydrochlorofluorocarbons (HCFCs) they replace) are strong radiative forcers (figure 1(b)). The five HFC species expected to make the largest contributions to surface radiative forcing by the mid-21st century, and in turn cause the largest atmospheric impacts, are HFC-23, HFC-32, HFC-125, HFC-134a and HFC-143a (Velders et al 2015). These HFC species have 100-year global warming potentials in the range of 700-14 000 (WMO 2014) (i.e., 1 kg of HFC emissions on average causes thousands of times more surface warming than does 1 kg of CO 2 ) and have relatively long atmospheric lifetimes of 14 to 228 years (SPARC 2013). HFCs are expected to make increasing contributions to global climate change in the coming decades, as atmospheric concentrations of HFCs rise (Forster and Joshi 2005, United Nations Environment Programme (UNEP) 2012). However, HFC emissions scenarios, and thus their resulting climate impacts, have largely been based on statistical and socio-economic projections of HFC emissions inventories (e.g., Velders et al 2015). Hurwitz et al (2015) demonstrated the potential atmospheric temperature impacts of HFCs in 2050 in a coupled chemistry-climate model, which incorporates the interactions between atmospheric chemistry, radiation and dynamics.
The present study extends that of Hurwitz et al (2015) by quantifying both the relative contribution of HFCs to future atmospheric change and the effects of several HFC mitigation scenarios. Relative impact of the HFCs in 2050 The atmospheric impacts of increasing carbon dioxide (CO 2 ), methane (CH 4 ), nitrous oxide (N 2 O) and HFCs, and of decreasing ODSs, can be distinguished by comparing NASA GSFC atmospheric 2D model sensitivity simulations. This study examines the temperature response to HFCs in the upper troposphere and stratosphere. As the GSFC 2D model is an atmosphere-only model, boundary conditions must be specified at the surface. These surface boundary conditions are based on NASA's Modern-Era Retrospective Analysis for Research and Applications (Rienecker et al 2011). Since explicit ocean-atmosphere model calculations of the surface temperature response to HFCs have yet to be performed, the HFC responses are estimated by scaling to those of HCFC-22 (Kratz et al 1993). The GSFC 2D model is relatively insensitive to the imposed surface temperature boundary conditions above 10 km (as discussed by Hurwitz et al 2015). The modeled temperature responses to increased atmospheric concentrations of these GHGs are the result of full coupling between the model's radiation, transport and stratospheric chemistry components. Figure 2 (orange shading) shows that HFCs augment the projected upper tropospheric warming due to CO 2 , and somewhat reduce stratospheric cooling. The magnitude of the atmospheric warming response to HFCs depends on the scenario for future emissions; a peak global mean warming of 0.19 K at 80 hPa is simulated when HFCs increase according to the Velders et al (2015) SSP5 scenario. In the upper troposphere, at 250 hPa, HFCs warm 10%-20% as much as CO 2 . Total column ozone decreases by 0.1 DU due to projected increases in HFCs (not shown), as compared with the 2.5-4.0 DU increase due to increasing CO 2 (e.g., Li et al 2009). Like the impact of increasing HFCs, future decreases in ODSs (because of the Montreal Protocol) will lead to warming of the upper troposphere and stratosphere. As ODSs decline, so does stratospheric ozone depletion, and therefore solar (UV) heating increases. In contrast, the contributions of CH 4 and N 2 O have the same pattern as CO 2 (i.e., upper tropospheric warming and stratospheric cooling), but their smaller contributions to radiative forcing correspond with their relatively smaller atmospheric temperature impacts. The IR absorption by CO 2 , CH 4 and N 2 O directly impacts atmospheric temperatures (i.e., enhancing tropospheric warming and stratospheric cooling) and modifies the stratospheric Brewer-Dobson circulation. This global mass circulation is important in determining the thermodynamic balance of the stratosphere, as well as the distribution and atmospheric lifetime of trace species. The GHG-induced acceleration of the Brewer-Dobson circulation is a robust result among chemistry-climate models, including the GSFC 2D model, with a projected range of ∼2%-3.2% per decade, depending on the GHG scenario (Fleming et al 2011, Butchart 2014). CH 4 and N 2 O have additional and important indirect effects on the atmospheric temperature structure due to their chemical reactivity (i.e., in changing the concentrations of radiatively active species such as ozone and water vapor).
The structure of the multi-decadal atmospheric temperature responses to CO 2 and the ODSs, shown in figure 2, is consistent with previous model studies. Separating the stratospheric effects of CO 2 and the ODSs, Shepherd and Jonsson (2008) found that, for the 2010-2040 period, the response is dominated by cooling by CO 2 , which increases with height, while declining ODSs lead to warming of the upper stratosphere. However, Shepherd and Jonsson (2008) did not consider the effects of heterogeneous ozone loss due to ODSs in the lower stratosphere, nor the effects of the other major GHGs. In three HFC mitigation simulations, HFC emissions initially follow the Velders et al (2015) and Miller and Kuijpers (2011) scenarios, then are eliminated as of 2020, 2030 and 2040, respectively (red, green and blue lines, figure 1). Projected emissions for the other GHGs (e.g., CO 2 ) are used in all four simulations (as in section 2.1). As compared with business-as-usual projections, these HFC mitigation scenarios represent 95%, 77% and 47% reductions in cumulative HFC emissions between 2015 and 2050 (table 1; figure 1(a)). Likewise, much of the projected surface radiative forcing is avoided (figure 1(b)). Eliminating HFC emissions as of 2020 essentially avoids the HFC-related upper tropospheric and stratospheric warming that would have occurred by 2050 (table 1; red line, figures 1(c) and (d)). The lower stratospheric ozone loss, resulting from the combination of changes in the atmospheric temperature structure and a strengthened Brewer-Dobson circulation (Hurwitz et al 2015), is also avoided. More than 90% of the HFC-related upper tropospheric and stratospheric warming, as well as 90% of the ozone loss, that would otherwise have occurred by 2050 can be avoided by eliminating HFC emissions by 2030 (green line, figures 1(c) and (d)). Likewise, 67% of the upper tropospheric warming, approximately 60% of the stratospheric warming and 52% of the ozone loss that would have occurred by 2050 can be avoided by eliminating HFC emissions by 2040 (blue line, figures 1(c) and (d)). Conclusions Separating the relative impacts of climate gases on the future stratosphere was recently recognized as a priority for the 2018 World Meteorological Organisation (WMO) Scientific Assessment of Ozone Depletion (Fahey et al 2016). HFCs should be explicitly considered in this and other upcoming climate and ozone assessments. While their impacts are smaller than those of increasing CO 2 , the chemistry-climate model simulations presented above demonstrate that HFCs could contribute substantially to anthropogenic climate change by the mid-21st century, particularly in the upper troposphere and lower stratosphere. On 15 October 2016, the parties to the Montreal Protocol agreed to a gradual phase-down of HFC production and use (www.unep.org/newscentre/Default.aspx?DocumentID=27086&ArticleID=36283&l=en). While this Kigali Amendment to the Montreal Protocol includes a larger number of HFC species, and considers more subtleties of the transition to climate-friendlier alternatives, the simple HFC mitigation scenarios described in this study demonstrate the benefits of taking early action in preventing future atmospheric change. Table 1.
Upper row: 2015-2050 changes in temperature (K) and layer column ozone (DU) in the upper troposphere (10-16 km), lower stratosphere (16-25 km), and upper stratosphere (25-50 km), due to increasing HFCs (following the SSP5 scenario for HFC-32, HFC-125, HFC-134a and HFC-143a (Velders et al 2015), and a business-as-usual scenario for HFC-23 (Miller and Kuijpers 2011)). Lower rows: HFC-related temperature change, ozone change and emissions avoided by 2050, in three HFC mitigation scenarios in which HFC emissions are eliminated beginning in 2020, 2030 or 2040. The % of 2050 impacts avoided is listed in parentheses. Note that the slight negative temperature values for the 2020 mitigation scenario indicate recovery to pre-2015 conditions.
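The cumulative-emissions-avoided percentages reported in the table could be computed along the lines of the sketch below; the emissions trajectory used here is a made-up placeholder, not the Velders et al (2015) or Miller and Kuijpers (2011) data.

import numpy as np

years = np.arange(2015, 2051)
emissions = np.linspace(1.0, 3.0, years.size)   # placeholder business-as-usual HFC emissions

def fraction_avoided(cutoff_year):
    # Fraction of 2015-2050 cumulative emissions avoided if emissions stop from cutoff_year onwards.
    mitigated = np.where(years >= cutoff_year, 0.0, emissions)
    return 1.0 - mitigated.sum() / emissions.sum()

for cutoff in (2020, 2030, 2040):
    print(cutoff, f"{fraction_avoided(cutoff):.0%}")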
2,293.2
2016-11-15T00:00:00.000
[ "Environmental Science", "Chemistry" ]
A Study of K-ISMS Fault Analysis for Constructing Secure Internet of Things Service Although Internet of Things (IoT) technologies and services are being developed rapidly worldwide, concerns about potential security threats such as privacy violation, information leaks, and hacking are increasing as more and more sensors are connected to the Internet. There is a need to study the introduction of risk management and of existing security management standards (e.g., ISO27001) to ensure the stability and reliability of IoT services. K-ISMS is a representative certification system that evaluates the security management level of enterprises in Korea and can be applied as a standardized process to enhance the security management of IoT services. However, there are growing concerns about deterioration in the quality of K-ISMS certification assessment, because Internet security incidents occur frequently even in K-ISMS certified enterprises. Therefore, various studies are required to improve the accuracy and objectivity of the certification assessment. Since existing studies mainly focus on simple statistical analysis of K-ISMS assessment results, analysis of the causes of certification assessment faults based on past data is insufficient. As a method of managing certification inspection quality, in this paper we analyze the associations among the fault items of the K-ISMS certification assessment results using association rule mining, which identifies association rules among items in a database. Introduction According to a survey by Gartner, the number of things connected to the Internet is expected to grow to 26 billion, and the market size is forecast to reach USD 1 trillion, by 2020 [1]. IoT is widely applied in various areas closely related to daily life, such as smart home appliances, smart cars, and healthcare. Since multiple networks can be controlled even through a single sensor, the hacking of one area can be fatal, as it can cause security threats in a chain reaction. As IoT services broadly utilize sensor information to provide a wide range of information, the risk of information leaks will increase. Malfunction or suspension of IoT devices will also pose a very serious threat, even to social infrastructure; the resulting economic damage is predicted to reach KRW 17.7 trillion by 2020 [1]. Therefore, there is a need to consider the technical and administrative vulnerabilities of each IoT element, such as sensors, wired and wireless networks, and platforms, from the design stage, and to review and study the application of existing standards as the key tool for continuously inspecting them at the operating stage. The Korean government operates the information security management system (K-ISMS) certification system to assess whether an organization has established and manages an appropriate information security environment. The K-ISMS, which is similar to ISO27001, the international standard for information security management systems, is designed to improve the information security management level of enterprises and protect them from various security threats [2]. K-ISMS evaluates whether an enterprise has set up a comprehensive information security management system, including administrative, technical, and physical protective measures to protect the safety of its information assets, using 104 certification criteria; it then issues the certification if the enterprise meets the requirements.
K-ISMS was introduced in Korea in 2002, and 332 enterprises have acquired the certification. Since acquisition of the certification by enterprises over a certain size became mandatory in 2013, demand for and interest in K-ISMS certification has been gradually increasing [3]. However, with Internet security incidents (e.g., hacking) occurring frequently in K-ISMS certified enterprises, there are growing concerns these days about deterioration in the quality of the K-ISMS certification assessment. Since the information security management systems of enterprises differ and fault cases vary, specialization and extensive experience in certification assessment are required. Moreover, there are limits to maintaining objective and accurate assessment quality, since the 104 criteria must be evaluated within a short period of time. To address these problems, various studies have been conducted, including a case study of faults in K-ISMS [4], an analysis of the economic effects of K-ISMS certification [5], and analyses of the security management process for various IT services [6,7]. However, no study has yet applied data mining to the relationships between faults. Thus, we aim to provide a guide for selecting priority assessment items within the limited assessment period by analyzing frequently occurring fault patterns and their associations in the K-ISMS fault data. For this purpose, we apply a data mining technique to analyze the association relationships among the fault items of the K-ISMS certification criteria. The paper is organized as follows. Section 2 introduces association rule mining, one of the best known and most widely used unsupervised techniques. In Sections 3 and 4, experiments are performed on the K-ISMS fault data. The conclusion is given in Section 5. Theoretical Background Data mining is a knowledge-finding process that extracts unknown, useful information by analyzing a large quantity of accumulated data. Among research identifying hidden patterns in data, association rule finding has been studied most widely, in areas such as market forecasting, medical research and IT engineering [8,9]. Association rule analysis refers to a technique that finds useful patterns among data items, expressed in a "condition-result" form. The association rules extractable from a given data set are compared in order to evaluate their importance. The measures commonly used to assess the strength of an association rule are the indexes of support, confidence, and interest [10]. The problem of finding association rules X → Y was first introduced in 1993 by Agrawal et al. [11] as the data mining task of finding frequently co-occurring items in a large Boolean transactional database D [12]. Typical applications include retail market basket analysis, item recommendation systems, cross selling, and loss-leader analysis. In the classical framework, an association rule is considered to be interesting if its support (s) and confidence (c) exceed some user-defined minimum thresholds [13]. Support is defined as the percentage of transactions in the data that contain all items in both the antecedent and the consequent of the rule, that is, P(X ∩ Y) [14]. Confidence, on the other hand, is an estimate of the conditional probability of Y given X, that is, P(X ∩ Y)/P(X) [13]. Association rule finding consists of identifying association rules whose support and confidence exceed the predefined threshold values.
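A minimal sketch of these measures over a handful of transactions is shown below; the transactions are illustrative placeholders using K-ISMS item codes, not the actual fault records analyzed in this paper.

def support(itemset, transactions):
    # Fraction of transactions that contain every item of the itemset.
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(x, y, transactions):
    return support(x | y, transactions) / support(x, transactions)

def lift(x, y, transactions):
    return confidence(x, y, transactions) / support(y, transactions)

transactions = [{"CL-1-1", "AC-3-3"}, {"CL-1-1"}, {"AC-3-3", "OS-2-2"}, {"CL-1-1", "AC-3-3"}]
x, y = {"CL-1-1"}, {"AC-3-3"}
print(support(x | y, transactions), confidence(x, y, transactions), lift(x, y, transactions))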
This process broadly includes two processes [15,16]. One is "frequent itemset finding," which finds only the itemsets that satisfy the support threshold value "minimum support"; it is a technique for finding the items that occur concurrently in the transactions. The other is "association rule generation," which adopts only the rules satisfying the confidence threshold value "minimum confidence" among the association rules created from the found frequent itemsets. Frequent Itemset Finding. While finding the frequent itemsets, the combinations of itemsets that can be created from the given items are generated, and the transaction data is searched for each candidate itemset to check whether the minimum support is satisfied. When the set of items in the transaction database is I = {i1, i2, . . . , im} and a transaction set composed of transactions T = {t1, t2, . . . , tn} is given, each transaction t is defined as a subset of the item set I (t ⊆ I) [17]. If an itemset X includes k items, it is defined as a k-itemset [17,18]. For example, {Beer, Diapers, Milk} is a 3-itemset; the null itemset has no items at all. If a transaction t includes all items in an itemset X, t is said to support X, expressed via supp(X). The support count σ(X) can be regarded as the number of transactions including the itemset in question. When the user defines a minimum support (minsup) and supp(X) ≥ minsup is satisfied, the itemset X is called a frequent itemset [19]. Association Rule Generation. Rule generation is the process of creating association rules from the frequent itemsets found during the "frequent itemset finding" process. Suppose X and Y are sets of items that do not contain the same elements: X ⊂ I, Y ⊂ I, X ∩ Y = ∅, X ≠ ∅, and Y ≠ ∅ [16]. An association rule expresses an association among frequently occurring data in the form of a "condition → result" rule (Rule: X ⇒ Y). Here, X is called the LHS (Left Hand Side) and Y is called the RHS (Right Hand Side) [19]. Support and confidence are used as statistical criteria to verify the validity of an association rule. "Support" is the ratio of the transactions that contain both X and Y among all transactions and is expressed as supp(X ⇒ Y). Since "support" reflects the frequency of the occurring pattern or rule, it should have a large value for the pattern or rule to be useful. "Confidence" is a criterion expressing the strength of the rule. For the rule X ⇒ Y, "confidence" refers to the ratio of the transactions that also include Y among the transactions that include X. "Confidence" is expressed as conf(X ⇒ Y). This "confidence" is an index that measures the accuracy of the conclusion Y when the condition X is true. Therefore, high confidence enables accurate prediction. Finding an association rule in the data items involves finding itemsets that have higher support and confidence than the user-defined minimum support and minimum confidence. For example, let us assume a situation in which the following association rule candidates are identified from the {bread, egg, milk} itemset [18]. If the minimum confidence is 70%, the second and third association rules will be selected. In this way, possible combinations of all itemsets are created, and some of them are selected as association rules depending on whether the minimum confidence is satisfied or not. (Rule 1) bread ⇒ {egg, milk}: confidence = 0.3/0.5 = 60%.
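The two steps can be illustrated with a short, self-contained brute-force sketch; the transactions and thresholds below are invented, and the exhaustive enumeration stands in for the Apriori-style candidate generation applied later in the paper:

```python
from itertools import combinations

# Toy data and thresholds for illustration only (not the K-ISMS fault data).
transactions = [
    {"bread", "egg", "milk"},
    {"bread", "milk"},
    {"bread", "egg"},
    {"egg", "milk"},
    {"bread", "egg", "milk"},
]
minsup, minconf = 0.4, 0.7

def supp(itemset):
    return sum(set(itemset) <= t for t in transactions) / len(transactions)

# Step 1: frequent itemset finding (exhaustive enumeration of all k-itemsets).
items = sorted(set().union(*transactions))
frequent = {}
for k in range(1, len(items) + 1):
    for cand in combinations(items, k):
        s = supp(cand)
        if s >= minsup:
            frequent[frozenset(cand)] = s

# Step 2: association rule generation from every frequent itemset with >= 2 items.
rules = []
for itemset, s in frequent.items():
    if len(itemset) < 2:
        continue
    for r in range(1, len(itemset)):
        for lhs in map(frozenset, combinations(itemset, r)):
            rhs = itemset - lhs
            conf = s / frequent[lhs]          # supp(X ∪ Y) / supp(X)
            if conf >= minconf:
                rules.append((set(lhs), set(rhs), s, conf))

for lhs, rhs, s, conf in rules:
    print(f"{lhs} => {rhs}  supp={s:.2f}  conf={conf:.2f}")
```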
A strong association rule can be filtered out using the support and confidence criteria. However, support and confidence have weaknesses. Support suffers from the "rare item problem" [20]: infrequent items not meeting the minimum support are ignored, which is problematic if rare items are important [21]. On the other hand, if the minimum support is low, the search space becomes larger. In addition, a high minimum confidence and minimum support do not necessarily mean a strong association, and such values can occur accidentally. Therefore, statistical correlation analysis, such as lift and conviction, is needed to solve these problems and find strong association rules [21]. Interest (or lift) is another statistic which attempts to correct this weakness [22]. Confidence tends to rate rules highly when the consequent (Y) is frequent [23]. For example, if 80% of transactions in a database contain Y, then the expected confidence of any rule X → Y is 80%, even before taking the influence of X on Y into account. The interest(X → Y) is defined as the confidence(X → Y) divided by the proportion of all transactions that contain Y. This scales the confidence to account for the commonality (or rarity) of Y [22]. The interest measure [13] is defined over [0, ∞] and its interpretation is as follows: (i) If interest (lift) < 1, then X and Y appear less frequently together in the data than expected under the assumption of conditional independence; X and Y are said to be negatively interdependent. (ii) If interest (lift) = 1, then X and Y appear together as frequently as expected under the assumption of conditional independence; X and Y are said to be independent of each other. (iii) If interest (lift) > 1, then X and Y appear more frequently together in the data than expected under the assumption of conditional independence; X and Y are said to be positively interdependent. Analysis Data. In this paper, we analyzed the fault data of 76 enterprises that received certification assessment in 2013 (using only the statistical results) and applied the representative "Apriori algorithm" [17][18][19] for association rule mining. The average fault rate of those enterprises was found to be 15%. (An explanation of the terms of the K-ISMS control items used in this paper is given in the appendix.) Among the 104 K-ISMS certification assessment items, frequent itemsets whose support was higher than the minimum support were created. A total of 825 rules were found using the brute-force method. When the numbers of fault items included in these rules were analyzed, 58 one-itemsets, 494 two-itemsets, 312 three-itemsets, and 20 four-itemsets were created. Figure 1 illustrates the K-ISMS fault items that occur frequently, led in order by CL-1-1 (asset identification), AC-3-3 (access control), OS-2-2 (security system operation), AC-2-3 (access right review), and CC-1-1 (encryption policy establishment). As the number of items increases when generating frequent itemset candidates, the computational workload increases exponentially. To solve this problem, the "pruning" method [24] is used to remove the unnecessary parts. "Pruning" involves discarding the combinations that do not satisfy the threshold criterion in each phase. The most universal pruning method is MSP (minimum support pruning); in other words, if the support of an itemset combination is smaller than the threshold value, the item is no longer added. To remove duplicated association rules, 307 association rule candidate groups were created through support-based pruning.
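Minimum support pruning can be sketched as a level-wise candidate construction in which any candidate falling below the support threshold is discarded before larger itemsets are built; the toy transactions and threshold below are invented for illustration and are not the certification data:

```python
# Illustrative Apriori-style level-wise search with minimum support pruning (MSP).
transactions = [
    {"a", "b", "c"},
    {"a", "b"},
    {"a", "c"},
    {"b", "c"},
    {"a", "b", "c", "d"},
]
minsup = 0.4

def supp(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

items = sorted(set().union(*transactions))
level = [frozenset([i]) for i in items if supp(frozenset([i])) >= minsup]
all_frequent = list(level)

k = 2
while level:
    # Build k-itemset candidates only from frequent (k-1)-itemsets ...
    candidates = {a | b for a in level for b in level if len(a | b) == k}
    # ... and prune every candidate whose support is below the threshold (MSP).
    level = [c for c in candidates if supp(c) >= minsup]
    all_frequent.extend(level)
    k += 1

for itemset in all_frequent:
    print(set(itemset), round(supp(itemset), 2))
```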
Association Rule Analysis. In the stage of support analysis, a support of 10% for a certain rule (X → Y) means that the ratio of faults covered by the rule in question is 10% among all faults. Figure 2(a) shows the support distribution graph of all fault items. The average support is 13.7%, and 2 rules have a support value above 30%; 19 rules have a support value over 20%, and 286 rules have a support value over 10%. (We used the arules package in R [19,25].) Support measures how frequently the two faults occur together among all transactions, whereas a confidence of 30% for a certain rule (X → Y) implies that the probability of fault Y occurring is 30% when fault X has occurred. Figure 2(b) shows the confidence distribution graph of all fault items. The average confidence is 51%; 11 rules have a confidence above 80%, 98 rules over 60%, and 160 rules over 50%. Figure 3 shows a visualization of the correlations among fault items that were found using the measure criteria (support, confidence, and lift) of the association analysis. Table 1 shows the top 5 association rules sorted by support value. Rule number 44 can be analyzed as follows. The support level is 34.2%, which is the ratio of finding a fault in the CL-1-1 and OS-2-2 control items at the same time. There is a 59% probability that a fault occurs in the OS-2-2 control item if a fault occurs in the CL-1-1 control item. In addition, since the lift is over 1, which indicates a correlation between the two control items, the correlation of the association rule is strong. On the other hand, rule number 443 is an association rule with low correlation, because its lift is under 1 even though its support and confidence are 30.2% and 52.2%, respectively. Table 2 shows the top 5 association rules sorted by confidence. Rule number 40 can be analyzed as follows. The ratio of faults occurring in control items CC-2-1, DR-2-1, and AC-2-3 at the same time (support) is low (10.5%). Note, however, that the ratio of a fault occurring in control item AC-2-3 when faults occur in control items AC-2-2 and OS-2-2 (confidence) is 100%. In addition, since the lift of this rule is over 1, the correlation of the association rule appears strong. Results and Discussion We have performed the process of finding the association rules that satisfy the predefined support and confidence threshold values, in order to carry out a relation analysis among the K-ISMS faults. The first process is "frequent itemset finding," which finds only the itemsets that satisfy the support threshold value. The other process is "association rule generation," which adopts only the rules satisfying the confidence threshold value among the association rules created from the found frequent itemsets. Table 3 shows a summary of the strong association rules within the range of the minimum support and minimum confidence. However, not all strong association rules are useful. The support-confidence framework can flag a rule X → Y as interesting even though the occurrence of X does not actually affect the occurrence of Y. To solve this issue, an interest (lift) analysis that indicates the level of correlation of the rules is required.
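Lift can be derived from the support of a rule and of its two sides, and the mined rules can then be ranked or filtered on all three measures, analogously to Tables 1 and 2; the rule names and values below are placeholders that only loosely echo the magnitudes discussed above and are not the published K-ISMS results:

```python
# Hypothetical rules given as (name, supp(X ∪ Y), supp(X), supp(Y)); values are invented.
raw_rules = [
    ("A => B", 0.34, 0.58, 0.52),
    ("C => D", 0.30, 0.58, 0.62),
    ("E => F", 0.10, 0.10, 0.55),
    ("G => H", 0.21, 0.60, 0.40),
]

scored = []
for name, sxy, sx, sy in raw_rules:
    conf = sxy / sx            # confidence = supp(X ∪ Y) / supp(X)
    lift = conf / sy           # lift = confidence / supp(Y)
    scored.append((name, sxy, conf, lift))

top_by_support = sorted(scored, key=lambda r: r[1], reverse=True)     # cf. Table 1
top_by_confidence = sorted(scored, key=lambda r: r[2], reverse=True)  # cf. Table 2

# Keep only rules exceeding joint thresholds, i.e., positively interdependent faults.
strong = [r for r in scored if r[1] > 0.3 and r[2] > 0.5 and r[3] > 1]
for name, s, c, l in strong:
    print(f"{name}: supp={s:.2f} conf={c:.2f} lift={l:.2f}")
```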
This paper selects 43 association rules by applying the following three conditions (see Table 4): (1) minimum support > 30% and minimum confidence > 50% and lift > 1, (2) minimum support > 20% and minimum confidence > 30% and lift > 1, (3) minimum support > 10% and minimum confidence > 80% and lift > 1. From the results of the analysis, we can forecast that the association of fault occurrence among control items is high, as the rule {CL-1-1 (Information asset identification) ⇒ OS-2-2 (Security system operation)} has over a 30% support level and a 50% confidence level. "Information asset identification" is a control item requiring that all information assets of the organization be classified and identified. A fault occurs if those information assets are not identified periodically, or if some assets are skipped. In other words, if a fault is found in "information asset identification," there is a high possibility of fault occurrence in "security system operation." A "security system operation" fault refers to the case in which the security system operating procedure is not complied with, or the blocking-rule management log of the security system (e.g., firewall) is not available or has been lost. There were two rules that have over a 20% support level and a 70% confidence level. The first rule was {PH-1-4 (Access control of physical area) ⇒ AC-3-3 (User password management)} and the second rule was {AC-4-6 (Access control of internet connection) ⇒ CL-1-1 (Information asset identification)}. In the first rule, the "access control of physical area" control item refers to the requirement that only authorized persons should be allowed to access the major systems inside the security area, and that the access log should be reviewed periodically. A fault occurs if access control of outsiders is not sufficient, or if mobile devices (e.g., USB drives) can be brought into the area. In addition, a fault of the "user password management" control item occurs when the passwords of major systems are not changed periodically, or the password use requirements are not met. Therefore, if the enterprise does not perform proper "access control" on major facilities and systems, there is a possibility that the "user password management" of major systems (e.g., server, network) is also insufficient. Conclusions In this study, we used association rule mining with the "Apriori" algorithm in order to analyze the correlations among K-ISMS faults, and we found 43 association rules. The results of this study suggest a high correlation among faults: if an organization identifies and manages its information assets carelessly, this can also affect its security system operation. Therefore, the resulting association rules may serve as useful information for decision-making about an organization's security activities and can be utilized as a guide to the assessment method during the K-ISMS certification assessment. If any fault occurs among the K-ISMS certification criteria, the items related to it by an association rule can be checked intensively. The rules can also provide guidance for analyzing the level of Plan-Do-Check-Act activity (the organization's security management phase) from the perspective of the correlation among faults. However, which rules are found to be useful can differ according to the size of the data set, because the adoption of a useful association rule depends on the occurrence frequency in the analyzed data.
Therefore, we need further studies of K-ISMS fault analysis, such as association analysis according to the scope of the organization's certification (number of employees and systems). Based on the association rule results obtained in this study, decision-tree analysis to forecast fault status and fault factor analysis using structural equation modeling will be pursued in subsequent work. Because the paradigm of the IT environment is changing from conventional PCs and mobile devices to IoT, a new approach is needed in terms of the range of protection targets, the characteristics of protection targets, and the protection subjects, in order to continuously strengthen the security level of IoT. In other words, the protection target should be expanded from existing PCs and mobile devices to all objects, such as home appliances, automobiles, and medical supplies. It is also necessary to move away from the conventional method of protection based on separate security systems and software implementations and interfaces, and to establish security policies, procedures, and standards to efficiently control and manage administrative, physical, and technical security. Moreover, an information security management system suitable for the IoT environment needs to be applied to maintain the continuous security level of IoT services and prevent the spread of risk from intrusion incidents, including the identification of the key assets to be protected and the relevant threats, as well as assessment of the current security level in order to establish policies for coping with those threats.
4,954.6
2015-09-01T00:00:00.000
[ "Computer Science" ]
Factors Contributing to Low Productivity and Food Insecurity in Bungoma County, Kenya Food is a basic necessity of life. It is a basic means of sustenance and key for a healthy and productive life. If Kenya is to continue to cut down on health costs and compete in a global economy, it should ensure adequate food security and nutrition within households. Food insecurity within households is a risk to people's livelihoods. If not addressed in good time, it could result in a disaster that will require foreign intervention for the affected community. The economic development of any nation is dependent on the productive capacity of its human resources, which is, however, a function of how well fed they are. Poor farmers have little or no access to credit, particularly short-term seasonal credit for farming Audsley et al. [1]. Under such circumstances, households lack economic capacity and are therefore at risk of being vulnerable to food insecurity. Crucial information on the types of interventions that can be most effective in increasing productivity, reducing hunger, targeting the most needy, informing preparedness and developing contingencies is lacking in most communities in Kenya Lautze et al. [2]. Problem Statement Available literature indicates that Bungoma County is food insecure and also records a poverty index of 52.9% compared to the national index of 46%, while food poverty stands at 43% KNBS [3]. There is documentary evidence that Bungoma County has many stakeholders dealing with food security issues, led by the County Government GOK [4]. This would give an impression of high production and food sufficiency at the household level, but that is not the case. Food situation reports dating back to 2011 show insufficient food stocks among households in Kenya GOK [5]. Records of studies done in Bungoma County revealed household food insecurity NALEP [6], Muyesu [7], KARI [8] and Ndienya et al. [9]. Many families in Bungoma County take one meal a day, in contrast to the recommended three meals per day UNICEF [10]. Given this contradiction, the study was set up with the objective of examining the factors that led to low productivity within households, making them vulnerable to food insecurity despite the County's interventions.
Objective: The objective of this study was to examine the physical, economic, environmental and social factors that led to low productivity and made households vulnerable to food insecurity in Bungoma County, Kenya. Contribution to the Field: The study will give recommendations to guide policy makers on issues of food security. This paper contributes to the knowledge bank important for scholars. It is arguable that the findings of this study, with a focus on Bungoma County, will inform similar studies in other counties in the entire country. Significance of Work: The outcome of the study will guide decision-makers at all levels in formulating food policies. Reliable and timely information on the incidence and causes of low productivity, food insecurity and malnutrition will be documented. Recommendations from the study are expected to assist households to understand the crucial factors of production and the risks of food insecurity, and to be able to appropriately plan their farming schedules. Materials: The study targeted household heads whose food security depended on farming. Community groups (women groups, men groups, youth groups and self-help groups) were targeted for focus group discussions. Opinion leaders, non-governmental organizations, community-based organizations/non-state actors, faith-based organizations and government officials were selected as key informants. Setting: This study was done in four sub-counties of Bungoma County: Bumula, Bungoma West, Mt. Elgon and Bungoma North. The County is located on the southern slopes of Mt. Elgon, and lies between latitudes 0°28′ and 1°30′ North of the equator, and longitudes 34°20′ East and 35°15′ East of the Greenwich Meridian. Procedure: The research work adopted a cross-sectional survey research design, and the variables examined were physical, environmental, social and economic factors. The population for the study was household heads, key informants and formally organized groups. A cluster (multi-stage random) sample of 384 households, calculated using a formula from the book by Mugenda [11], was selected from a household population of 1,553,655 KNBS [12]. This study utilized both primary data collected from the field and secondary data from archival sources. Data was collected using semi-structured questionnaires administered to the selected household heads. Four (4) focus group discussions were held, and each group was composed of eight to twelve (8-12) members of mixed gender. Twenty (20) key informants purposively chosen from opinion leaders, government departments, faith-based organizations and non-governmental organizations were interviewed. More information was obtained from observation checklists [13,14]. Analyses: The quantitative data were organized, coded and edited by a process called data cleaning Punch [15][16][17]. The Statistical Package for the Social Sciences (SPSS) was used to analyze the data. Two analyses were made: descriptive analysis was done by use of means, modes, standard deviations, variances, percentages and frequencies, while inferential analysis was by use of the chi-square test and Spearman rank order correlation analysis. Physical Factors and Vulnerabilities to Food Insecurity Various physical factors were identified as contributors to low productivity. These included small land size for farming and non-use of fertilizer and certified seeds. The soil was infertile, which led to low yields; poor infrastructure and a disorganized marketing system also contributed [18].
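The inferential step described above can be illustrated with a chi-square test of association; the contingency counts below are hypothetical and only show the mechanics of the test, not the study's actual survey data:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x3 contingency table: rows = use of fertilizer and certified seeds
# (yes / no), columns = household production level (low / medium / high).
observed = [
    [30, 45, 60],   # used fertilizer and certified seeds
    [90, 40, 25],   # did not
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
if p < 0.05:
    print("The association between the factor and production level is significant at the 5% level.")
```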
Chi-square tests revealed a significant relationship between physical factors and production levels in the county (p-value = 0.035 < 0.05). It was also established that markets were few and far apart from farmers. Farm produce outlets included the farm gate, neighbors, local or open markets, and others. International markets fetch better prices, but unfortunately none of the households interviewed was aware of the existence of an export market. Very little produce was sold to supermarkets, meaning low incomes that could not enable farmers to purchase certified seeds or other food items not produced on the farm [19,20]. The seasonal roads, as well as the lack of means of transport, made farmers sell their produce at low prices on the farm. Besides this, farmers did not have government permits and certificates of operation to enable them to penetrate the supermarkets in the country. Economic Factors In order to earn a living and be food secure, households engaged in the following activities: dairy production, maize farming, horticulture, banana farming and petty trade. Most of the households depended on farming, with some shifting from subsistence to business farming to raise income. Similar views were found by Makhanu et al. [21] working in the same region; this shift in attitude to do farming as a business reflects current trends of blending specialization and diversification to reap optimal benefits by smallholder farmers. This was also observed by similar studies, as captured by government policy initiatives in agriculture GOK [22]. The economic factors that contributed to low productivity and food insecurity were listed as high levels of poverty and the high cost of farm inputs. Due to the high cost of farm inputs like fertilizers and certified seeds, the majority of farmers planted uncertified maize seeds (number name) without fertilizer. As a result of planting uncertified seeds, the cereal yields were so low that they hardly sustained a household for three months after harvest. Horticulture farming was affected due to the non-use of chemicals to control pests and disease [23][24][25]. Environmental Factors Environmental factors contributing to food insecurity were found to be natural calamities like drought, floods, hailstones and inadequate/unreliable rainfall. Crops on farms were at risk of natural calamities like hailstones [26][27][28]. Too much rainfall led to floods, which damaged both property and livelihoods. Human activities like the cutting of trees led to deforestation, and this resulted in soil erosion. Erosion made the soil unproductive as the soil nutrients were washed downstream, hence food insecurity for such households. Other factors established were pest and disease outbreaks, which were a risk to both crops and animals [29]. This finding is comparable to a study done by Ahmed et al. [30], which revealed that increasingly vulnerable environmental conditions such as diminished biodiversity, soil degradation or growing water scarcity can easily threaten food security for people dependent on the products of the land, forests, pastures and marine environments for their livelihoods. These findings also support Kenya Government recommendations for adapting to climate change, such as conservation farming and land use practices that reduce emissions of greenhouse gases GOK [31]. Social Factors A key social factor contributing to vulnerability was the gender of the household head. The study established that 80% of households were headed by men while 20% were headed by women.
All decisions in the household were made by men. In many cases men were found to be the cause of food disasters in their own homes. Women had no say in decision-making concerning food issues where men were heads [32][33][34]. Men made the final decisions in relation to land allocation for different crops, when to market farm produce, and the use of cash from the sale of farm produce. The study further revealed that women were placed in the same category as children, so they could not be allowed to make final decisions in the households. One man, during a focus group discussion, quoted the Holy Bible (Genesis 2:18) [35], saying 'women were made to assist men, therefore they should always be subordinate to us'. This notion made households vulnerable to food insecurity, as productive ideas from women may not be adopted. The findings were similar to the study done by Lautze et al. [2], who found that traditional values, customs and ideological beliefs contributed to the social vulnerability of any given household. Focus group discussions recorded that culture prohibited working on the farm during bereavement, and this contributed to low productivity in case a funeral occurred during the planting season. Farming activities may be stopped for periods exceeding three weeks. This can be crucial, as even a small period of time lost affects agricultural production Africa Progress Report [36], Delgado [37]. Laziness and idleness among the youth, and theft of farm produce from the farm and store, were mentioned as contributing factors to food insecurity. Key informants cited lack of knowledge on production and storage as factors making households vulnerable to food insecurity. This was also revealed by the household interview results, where 61% of the household heads had only attained primary-level education, meaning they were limited in knowledge and in their level of understanding of new farming technologies [38,39]. Conclusion and Recommendations Farm production by households in Bungoma County was found to be low, making them vulnerable to food insecurity, because of the following factors: physical (poor road networks and markets), economic (poverty and high cost of farm inputs), environmental (climate variability and deforestation) and social (cultural beliefs and negative attitudes). The study recommended that the County Government of Bungoma should subsidize the cost of farm inputs and make them accessible to farmers, that the road network should be improved to ease transportation and access to markets for farm produce, and that people should be sensitized on positive cultural practices and attitude change to allow both genders to participate in issues of food security.
2,822.4
2017-11-01T00:00:00.000
[ "Economics", "Environmental Science" ]
Empirical evaluation of professional traineeships for young people up to 30 years of age In this article, we evaluate ‘Professional traineeships for young people up to 30 years’, an active labour market policy measure implemented in the Czech Republic. Professional traineeships were one of the possibilities for suitable offer to young people within Youth Guarantee in the Czech Republic in 2014 and 2015. First, we conducted a process evaluation (document analysis and interviews) to uncover the design and implementation aspects of the program. Next, we followed the counterfactual impact evaluation approach towards the estimate of returns to unemployment (competing risk analysis) based on individual administration data from public employment services. We have found that professional traineeships were successful in attracting the interest of both young people and employers. Mainly young people with middle and high level education have entered the program. Most of them have been provided with on-the-job subsidies in the private sector. When considering the impact of the program on the unemployment of participants and a control group, it was shown that after two years, the measure was effective only for young people with long pre-program Employment Office registration. When we consider the reasons for leaving Employment Office registration, the measure seems to be more effective, since many young people in the control group left the Employment Office register in favour of options that were outside of the labour market. INTRODUCTION Across Europe, governments are implementing social policy measures supporting the school to work transition of young people with the aim of helping them to integrate into the labour market. All EU countries have also committed to the implementation of the Youth Guarantee in a Council Recommendation of April 2013. The Youth Guarantee aims to offer a good continued education, apprenticeship, training or employment opportunity to all unemployed young people within four months of their leaving employment or education (Escudero & López Mourelo, 2015). It represents a good opportunity, in particular, for some countries, to rethink and reorganise active labour market policies targeting young people. Furthermore, its design and implementation can also help identify the linkages between the labour market and education systems, as well as between the labour market and welfare systems, which need to be improved in order to ensure smoother transitions into the labour market (Bussi, 2014). In this article, we discuss the planning, implementation and impact evaluation of the active labour market policy (ALMP) measure 'Professional Traineeships for Young People up to 30 years',2 realised in the Czech Republic. To our knowledge, this is the first attempt to evaluate this program, which in the Czech context is a rather important ALMP measure for young people. In the official documents, this project was used as a prime example of projects realised within the Youth Guarantee (YG; Employment Office, 2014a, 2015a, European Commission, 2014. Employment Office directive No. 24/2014 about the realisation of YG mentioned professional traineeships as one of possibilities for suitable offer to young people. 
A response to the YG monitoring questionnaire (the Member States' response to 2013 Council Recommendation on establishing YG) stated: 'Within the YG scheme, the Professional traineeship plays an important role, as it represents an offer, i.e., an activity with many parameters of "quality offer", where the participant gains work skills and is remunerated for them, and the mentor performs the training.' (MLSA, 2016b: 9). The Ministry of Labour and Social Affairs stated that professional traineeships helped a total of 11,000 young people gain their first work experience through financial support for their employers (MLSA, 2016a). There is also the follow-up project 'Guarantee for Youth', implemented in the period 2016-2020 (for specific conditions, see Employment Office directive No. 17/2015). We addressed the two following research questions in this article: 'How were professional traineeships planned and implemented in 2014-2015?' 'What impacts did the professional traineeships have on Employment Office registration of participants enlisted in the program in 2014?' The first question covers information about the program goals, plans and its implementation. The second question covers the outcomes of the program, which we measure as the state and duration of the Employment Office registration and declared reasons for leaving the Employment Office register. We used the counterfactual impact evaluation approach to estimate the level of leaving and returning to the Employment Office register (competing risk analysis). The most obvious limitation of our analysis is that we have data only from the Employment Office registers and cannot track the subsequent employment or wages of the program participants, or those of the control group. Such data exist but are not available to us due to the strict legal limitations set in the Czech Republic for the protection of data. To answer the above-defined questions, we combined methods typical for monitoring/process evaluation with methods typical for impact evaluation. This approach is in coherence with the perspective proposed by Rossi et al. (2004) that process evaluation is an indispensable adjunct to impact assessment so as to avoid the problem of 'black box evaluation'. We assessed the project with information from various sources, including publicly available press reports and evaluation reports, aggregate data provided by the Ministry of Labour and Social Affairs, data from internal monitoring reports provided by the regional Employment Offices, and the administrative individualised data of jobseekers provided by a national data provider (OKSystem company). We also quote some reflections on the program from interviews with Employment Office workers at both the national and local levels. The rest of the article is organized into the following sections: first, on the basis of the theoretical arguments and previous empirical evidence, we discuss the effects of ALMPs for young people, especially the effects of subsidised workplaces in the private sector. Then, we provide information about professional traineeships, including the description of project goals, targeting and activities, information about how the project was implemented, and feedback from Employment Office staff. Next, we briefly discuss the impact evaluation methodology and present its results. 
THEORETICAL DISCUSSION AND REVIEW OF PREVIOUS FINDINGS ABOUT THE FUNCTIONING OF SUBSIDIES IN THE PRIVATE SECTOR AND IMPACTS OF ALMP MEASURES FOR YOUNG PEOPLE We understand professional traineeship as a specific sub-type of active labour market policy program that is aimed at young people and combines a subsidy in the private sector with training on the job. Zimmermann et al. (2013) describe other similar programs in Germany and Spain. The key aspect of the program is the pay subsidy. The goals and functions identified in connection with subsidies in private sector employment may include the following (Betcherman, Olivas, & Dar, 2004; Brown & Koettl, 2012; Kuddo, 2009; Martin & Grubb, 2001): helping to match young people to jobs; motivating employers to create (additional) jobs; helping individuals to enter the labour market and to keep in contact with the work world, raising their future chances on the labour market (transition effect); enhancing the motivation and skills of the participants and allowing them to gain work experience; supporting the disadvantaged and long-term unemployed with jobs, even at the expense of the less disadvantaged (short-term) unemployed; possibly temporarily enhancing the wage and job quality in the initial phase of employment (which may enhance motivation to take such a job); and possibly reducing work on the informal labour market. Measures such as professional traineeships may be particularly useful in the countries of Central and Eastern Europe, where many places of dual vocational training have in the last thirty years been replaced with general education or school-based vocational training. This has also been caused by the low acceptance of vocational education by young people and their parents (see Zimmermann et al. 2013). Professional traineeships should provide young people with systematic training and work praxis, and allow for the shifting of employer preferences towards recent graduates rather than older young workers. According to Caliendo, Künn, and Schmidl (2011), Card, Kluve, and Weber (2015), Kluve (2006), and Vooren, Haelermans, Groot, & Maassen van den Brink (2017), private-sector subsidies are among the more effective ALMP programs. In the Czech Republic, Kopečná (2016) evaluated another similar measure of the Czech Youth Guarantee: 'Internships for young job seekers'. Students (mostly from universities) participated in these internships, which were organised in the Czech Republic by the Fund of Further Education and usually lasted for 2-4 months. Kopečná (2016) found positive impacts of these internships on the economic status and incomes of former participants one and a half years after their registration in the program. Nevertheless, it is supposed that programs such as professional traineeships might have a substantial dead weight3 or substitution effect, with only a small net gain in employment (Martin & Grubb, 2001; van Ours, 2004). Wunsch and Lechner (2007) argue that the program participants' prospects are a key factor for explaining program effectiveness, because the risk of a lock-in effect interacts with the job prospects that participants would have without the program. Labour market conditions also influence the effects of the program. The lock-in effect is not of much relevance for the young participants of professional traineeships, because staying for at least some time at the workplace where the job was subsidized is a preferred characteristic of the program.
The efficiency of invested money is not the main consideration here (see e.g. Brown & Koettl, 2012). Caliendo et al. (2011) and Wunsch and Lechner (2007) have found that a positive effect of subsidised jobs is that participants largely remain in the job even after the program ends. However, this effect may decline over time (see Potluka et al. 2016). It is therefore important to see whether the participants returned to the Employment Office register after the end of the subsidized job. The question of program targeting is very specific for professional traineeships, because there are almost no a priori targeting restrictions (except age and unemployment criteria), and self-selection by participants or employers is a probable condition of program placement. While the impact of the program may be dependent on the selection of the program participants, there may be a trade-off between improving the program impact through stricter targeting, on the one hand, and stigmatising participants, on the other, which may reduce the employers' willingness to participate in the program (Martin & Grubb, 2001). When program participation is voluntary, the attractiveness of taking part in the program for prospective participants may depend on the characteristics of the program compared to those of the other options available. The conditions in professional traineeships were more favourable than in the other ALMP measures implemented in the Czech Republic (see below). The literature considering the impacts of ALMP measures targeted at young people, including both training and job subsidies, shows that such programs have generally been ineffective, or less effective (e.g., when compared to older cohorts), or that only some of the programs have been effective (see Betcherman et al., 2004; Calmfors, Forslund, & Hemström, 2002; Card et al., 2009, 2015; Kluve, 2006; Martin & Grubb, 2001; a notable exception is Caliendo et al., 2011). However, we still lack a plausible explanation for such less effective results. One explanation may lie in the targeting of the program (length of unemployment, level of education, or various disadvantages of the young participants): the participants may be in too good or too bad a situation to benefit. Another explanation is that young people have a less stable situation and tend to experience higher labour market mobility (Caliendo & Schmidl, 2016). DESIGN OF PROFESSIONAL TRAINEESHIPS: GOALS, TARGETING AND MAIN PROJECT ACTIVITIES Employment Office headquarters was the principal agent in establishing professional traineeships. The social problem that the project addressed was the lack of experience of young people, which was perceived as a crucial barrier to labour market integration (Employment Office, 2014b). The goal of the project was the inclusion of young people into the labour market. It was to be achieved through the support of a stable job, which would prevent the participant from returning to the Employment Office register. Some designers of the measure expected that a proportion of the employers would continue to employ the young people even after the end of the project (ESF, 2014). At the individual level, the project should also help the young people through activation and motivation (e.g., with counselling) and by supporting their employability through improving their knowledge and competences. The activities were to be tailored to the needs of the young people.
The Employment Office considered this project to be a good option especially in the period of culminating economic recession (in 2012), when the project was planned (information from an interview at the Employment Office headquarters). Many concrete conditions for the realisation of the project were defined in the directives of the General Manager of the Employment Office No. 11/2013and No. 16/2014(Employment Office 2013b. The documentation for all 14 regional individual projects defined the main goals in a similar way; however, some of the conditions of these projects could have been different within the space pre-defined by the directives. The project was targeted at young people (primary target group) and employers. The primary target group was defined as young people up to 30 years old who had been registered at the Employment Office for at least four months (some exceptions were possible) and had less than two years of work experience since finishing their formal education. At the initial phase of the project, those people with longer Employment Office registration were preferred (see Employment Office directive No. 11/2013). In the later phase, the length of registration was no longer relevant for participation in the project (Employment Office directive No. 16/2014, Employment Office, 2014d). There were no limits on the level of education of the young people (Employment Office, 2015d). The employers of the professional traineeship participants could have been any organisation, including those in the nonprofit sector. According to the Employment Office directive No. 16/2014, more than 50% of the subsidised jobs should have been implemented in private or for-profit organisations. State institutions and organisations owned by the state were excluded from taking part, due to the Czech employment law (Employment Office, 2016b). Employers could seek support for their own candidates or they could accept candidates provided by the Employment Office. A specific Employment Office committee was to have assessed the young candidates to decide whether they should be given the option of taking part in traineeship and/or mentorship. Employment Office directive No. 11/2013 broadly defined the possible project activities. The following descriptions of the activities refer to the general model of the projects -the specific conditions in the projects could have been and were different, depending on the decisions of the Employment Office officials in the local contexts. Individual, group and career counselling: These activities included work orientation, activation program, work diagnostics and career counselling (including activities aimed at encouraging a return to education). Training: Requalification training was possible within the professional traineeships when there was a special need for it and it was beneficial for both contracting sides (Employment Office, 2014c). Training was to be realised before the placement of the young worker to prepare him/her for the position. Creation of traineeships (subsidised jobs): This was the most important activity in the project. The measure was to provide jobs for young people and training on the job. Usually full-time work positions were to be created; part-time employment was possible in specific cases (e.g., for those with parental responsibilities), but not less than 20 hours a week. The definition in the Employment Office directives No. 11/2013, No. 16/2014and No. 
17/2015 requiring a correlation between the job and the previous education of the young person was rather vague: 'Job seekers participating in professional traineeships must be placed in workplaces with at least minimal potential of career development and with a specific employer that corresponds to the level of their achieved education.' Both fixed-term and permanent contracts were possible, but work positions with work contracts of unlimited duration were preferred (Employment Office, 2014b). The financial support for young people was provided for a minimum of 6 to a maximum of 12 months (usually it should have been 12 months). The minimum duration was reduced to three months in 2015 (the last year of the project). The maximum level of support was 24,000 Czech crowns (CZK) per person per month, including the employer's contributions for social insurance. After the end of the training, the employers were to provide trainees with a certificate of traineeship. In the case that the trainees were not intending to stay with the company, the current employers were also to provide recommendations for other prospective employers. Mentorship: The role of the mentor was seen in the idea of easing the young people into the job according to a training plan. Mentors had to be stable employees of private firms (beneficiaries) who had worked in the firm for at least 3 months. The mentor could have been paid from the project for a minimum of four to a maximum of seven months, according to the appropriate proportion of the normal salary and the time spent on the project. The greatest effort of the mentor was expected in the first three months of the program (Employment Office directive 11/2013; Employment Office 2013a, 2014b). Work on trial (defined by the Employment Office directive 15/2014) was used only in the Ústecký Region. The goal of this measure is to assess the readiness of young people to work in a subsidised job. It was for a maximum of three months and with different (lower) financing conditions from the other subsidised jobs. Support activities: The projects could cover travel costs, health screening fees before job placement and payment for food during project activities (Employment Office, 2016b, 2016c). IMPLEMENTATION OF THE PROJECT The professional traineeship project was financed from ESF (Operational Programme Human Resources and Employment) and cofinanced from the national budget (Employment Office, 2014b). Although the main stakeholder in the project was the Employment Office, there were sub-contracted partners in some regions, e.g., for diagnostics of skills and training activities for the participants. The program was mainly implemented as 'Regional individual projects', which meant that 14 regional branches of the Employment Office realised 14 separate projects . Not all such activities were mandatory, nor were they systematically used in all regions. Some of them were used in only one region (see Employment Office, 2016c, 2016e, 2016g and Table 1). The numbers of project participants and money allocations were very different in the regions because of the various populations and scope of youth unemployment. Most of the participants were from regions that have the most severe economic problems, such as Ústecký, Jihomoravský and Moravskoslezský. The enormous interest of both young people and employers led to a gradual but great increase in the magnitude of the project in all regions (Employment Office, 2016g;MLSA, 2015). 
Data from the Pardubický Region showed that mainly small business employers and self-employed (free-lance) people participated in the project (Employment Office, 2015b).4 Available data from the regional Employment Offices showed that participants from the target group often had a secondary or tertiary education (Employment Office, 2015b). According to the information from some of the Employment Office reports, the participants were actually often graduates from high schools or universities in economics, business, trade, management, electrotechnology, IT, pedagogy, construction, engineering and social sciences (Employment Office, 2014b, 2015b). Since Kopečná (2016) found similar targeting in 'Internships for young job seekers', we can conclude that these targeting results were not an exception within the Czech Youth Guarantee measures. Data about the real correspondence of the fields of the jobs to the previous fields of education confirm that there was the expected correspondence between the level and field of education and the internship position.5 We provide an overview of the implemented project activities based on the aggregate statistics of the regional Employment Offices in Table 1. Despite the lack of data (and some uncertainty about the methodological consistency of data from various data sources), we can conclude that the projects were implemented quite differently from region to region. The differences were mainly in the voluntary activities; this reflects the different strategies of the regional Employment Offices, as well as different reflections of the needs of the participants. Training courses and mentorship agreements were (much) less used than was initially expected, due to the low interest of young people and employers. Training was rather rare in most of the projects (see Employment Office, 2016c; Table 1). According to the information from an interview at the Employment Office headquarters, a mentor was contracted in about 30% of traineeships, depending on how demanding the job was. This corresponds to the available data. Low interest in mentorship agreements reflected the immediate need of employers for already qualified workers and their perception that the level of mentorship remuneration was low compared to the administrative demands it entailed (Employment Office, 2016i - Plzeňský, Praha, Vysočina, Liberec and Pardubický regions). The maximum hourly wage (including social contributions) of the mentor was raised from the initial 151 CZK per hour to 165 CZK in 2014 and then 176 CZK in 2015. There was a set maximum number of hours per week, resulting in a reimbursement of around CZK 5,000 per month (Employment Office, 2016c, 2016i). The provision of traineeships was the key and most widespread activity in the project (Employment Office, 2016c, 2016e). The subsidised jobs were implemented mainly in 2014 (the key year of the project). According to a national report of the Employment Office (Employment Office, 2016a), the project created 8,580 traineeship agreements and 2,374 agreements on mentorship. Basic information about the scope of traineeships (at the regional level) is provided in Table 1. In rare cases, some participants were supported twice when they were not satisfied with their first job and ended it prematurely (Employment Office, 2016e).
The actual maximum amounts of financial support offered to the employers by the Employment Office varied from region to region (see Table 1). Employment Office workers at the regional level confirmed in interviews that the financial support in professional traineeships was substantially higher than the level of support in similar national projects (13,000 CZK). The real duration of internship was usually up to one year. It was prolonged in some cases to allow the employers to become better acquainted with the particular young people and to prevent the participants from returning to Employment Office registration (Employment Office, 2016i). The overall share of traineeship participants who did not finish the planned duration of the traineeship was estimated to be 15-25%: 18% in Středočeský region, 20.7% in Plzeňský region, 22.8% in Královéhradecký region and 18% in Vysočina region for people with a 12-month long traineeship (Employment Office, 2016i).6 The money that was unintentionally 'saved' due to the premature ending of internships or mentorships was partially used to support other internships, or not spent (Employment Office, 2016i). According to a representative of Employment Office Headquarters, the professional traineeship program was considered a successful project. This was explained in the following statements: A) The project has included some innovative elements like mentoring and a more systematic and complex approach to young unemployed people. B) There were positive effects evidenced in providing job experience opportunities for the young (in particular, during the times of recession, when it was otherwise difficult to find work) and in building their skills and motivation. C) There was growing interest from the start of the project from employers as well as the young people. While these statements are presented here just as opinions, A and C are in correspondence with the data presented in this article. However, there were also problems. In some of the regions, sometimes it was difficult to find suitable, capable and motivated participants (Employment Office, 2016i -Moravskoslezský region). During the interviews, Employment Office workers at both national and local levels recognised another important problem: 'the offer' (in this case, mainly a subsidised job) was not sufficient for some of the participants to overcome their cumulated handicaps and various personal, life and labour market problems. These young people often prematurely ended the ALMP programs of their own accord, even though not fulfilling the obligations during the program could be classified as 'thwarting of cooperation' and could be sanctioned by expulsion from the Employment Office register. This was because they, for example, found it difficult to regularly go to work, or the job demands were too high for them. An estimated 10% of participants fall into this category for the professional traineeship project. METHODOLOGY OF IMPACT EVALUATION FOR PROFESSIONAL TRAINEESHIPS We used individualised administrative data and applied a quasi-experimental approach to assess the short-(up to 12 month) and medium-term (12-24 months) impacts of the project (as defined by Card et al., 2009). We did multiple imputations to fill the missing data; we used propensity score matching to solve the contra factual question and then we used the cumulative incidence function for the presentation of our results. 
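The matching step named above can be sketched as follows. This is only an illustrative Python approximation on simulated data (logistic-regression propensity scores, nearest-neighbour matching without replacement, and a caliper on the logit scale), not the SPSS 'PSMatching' procedure actually used in the evaluation, and all variable names are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated covariates and treatment indicator (1 = traineeship participant).
n = 2000
X = rng.normal(size=(n, 4))        # stand-ins for age, education, unemployment history, ...
treated = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1]))))

# 1) Propensity scores from a logistic regression, taken to the logit scale.
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
logit = np.log(ps / (1 - ps))
caliper = 0.15 * logit.std()       # caliper as a fraction of the SD of the logit

# 2) Nearest-neighbour matching without replacement, in random order.
treat_idx = rng.permutation(np.flatnonzero(treated == 1))
control_idx = list(np.flatnonzero(treated == 0))
pairs = []
for t in treat_idx:
    if not control_idx:            # stop if the pool of controls is exhausted
        break
    dists = np.abs(logit[control_idx] - logit[t])
    j = int(np.argmin(dists))
    if dists[j] <= caliper:        # discard matches falling outside the caliper
        pairs.append((t, control_idx.pop(j)))

print(f"matched {len(pairs)} of {len(treat_idx)} treated cases")
```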
For calculations of propensity score matching, we used a 'PSMatching' plug-in for IBM SPSS, developed by Felix Thoemmes et al. (2012). For the cumulative incidence function, we used the R program and the 'cmprsk' package (Gray, 2015), with a plug-in by Scrucca, Santucci & Aversa (2007). First, because we had some missing data in the database, we used multiple imputation to estimate the missing values of variables by using the other variables included in the model. We had three variables with some missing data (e, h and i; see below), with 5.5% of values missing in these three variables and 15% of cases having at least some missing data. As 'PSMatching' is unable to deal with missing data, we solved the problem by using the multiple imputation recommended by Thoemmes (2012). Hill (2004) and Mitra and Reiter (2012) describe the rationale behind the combination of multiple imputation and propensity score matching, and define two possible approaches ('within' and 'across'). Using the 'within' approach, we could match all 10 imputed datasets one by one and then average the results. Next, we performed propensity score matching (see e.g. Rodríguez-Planas & Jacob, 2010) to create a control group of young people who were not program participants. We implemented nearest neighbour matching without replacement, with a random matching order and a calliper of 0.15 standard deviations of the logit of the propensity score. We chose nearest neighbour matching because it suits our large dataset well and allows for easy combination with exact matching of specific variables. We used a calliper to avoid poor matches and a random matching order to avoid the dependence of estimates on the order in which observations are matched (see Caliendo & Kopeinig, 2005; Stuart, 2010). The dependence of the results on particular cases was further diminished because a given case in the treatment group was often matched, across our 10 models, with different cases in the control group. The average differences in outcomes between these ten models are minimal. We used the standardised differences approach (Thoemmes, 2012) and performed joint Hotelling T-squared tests as balancing tests. After matching, none of these diagnostics showed any substantial differences in either of the groups of paired data (see matching quality diagnostics in the appendix). DATA AND VARIABLES We used data about the program participants from the Czech Republic employment services (provided by the OKSystem data manager). Only those program participants who started the professional traineeship program in 2014 have been included in this evaluation. Young people in the control group were drawn from a pool of about 321,000 jobseekers in the same age category (15-30 years) as the program participants, who were in the Employment Office register but did not take part in any ALMP measure in 2014. Data about Employment Office registration are saved in the system as specific dates. We used these data about program outcomes in both continuous and discrete form, including the fact of being listed in the Employment Office register after certain fixed times, and the spells of Employment Office registration. We then combined this information with the declared reasons for leaving the Employment Office register. We measured the impacts of the program on participants from the first day they took part in the program, and, in the case of the non-project control group, from the point of the corresponding length of pre-event Employment Office registration.
For the paired professional traineeship participants and the control group, the status was tracked for 2014, 2015, 2016 and the first quarter of 2017 (for a minimum of 750 days). For descriptive data about the outcomes of the first unemployment spell, see Table 2. We used the following variables for propensity score matching computation: a) regions of the Employment Office at the regional level; b) unemployment rate at the local level; c) number of job offers available at the local level; d) gender; e) age; f ) health status; g) education level; h) field of education; i) ISCO class of preferred employment; j) length of previous Employment Office registration (former unemployment history); k) length of employment in the last three years before the start of the first registration in 2014; and l) previous self-employment status. Additionally, we used exact matching on m) categorised period of starting the first registration continuing until 2014; n) elapsed duration of previous unemployment spell: from Employment Office registration to the start of the program (or corresponding time point). We considered the inclusion of interaction and quadratic terms. The basic characteristics of all (before matching) and paired (after matching) program participants and young people in the control group are provided in Table 2. There are insignificant differences between the groups. We are not estimating the average treatment effect on the treated (ATT), but the partial effect of the measure on the paired cases (73.2% of all participants who started professional traineeships in 2014). Tab. 2: Information about the professional traineeship participants who started the program in 2014 and the control group. PROFESSIONAL TRAINEESHIPS RESULTS BASED ON 'OKPRÁCE' DATA In this section, we provide results of the professional traineeship program. According to OKPráce data, most program participants completed the professional traineeship program. However, 13.2% ended earlier to go to unsubsidised employment; 3.3% of the total were not able to finish the program, or they ended prematurely for reasons other than employment. Graph 1 shows the occurrence of program participants and young people in the control group in the Employment Office register in 30-day intervals for 750 days after the start of the program. Program participants often ended their Employment Office registration within a period of three months of the start of the program. Some project participants gained employment without needing subsidised jobs (see below). These results are valid for all three groups of program participants with different lengths of pre-program Employment Office registration (i.e., short-, medium-, long-term unemployment). Most of the young people stayed off the Employment Office register after the subsidy ended (with the subsidy not usually being longer than 12 months). The young people in the control group with a short-and mediumlength pre-program Employment Office registration were also able to leave the Employment Office register within 750 days. There is only a very small difference in the presence in the Employment Office register between the participants and control group (the initial difference due to the subsidy almost disappeared within 2 years). For the previously long-term unemployed, there is a more substantial difference in their occurrence in the Employment Office register during the observed period. 
Graph 1: Occurrence in the employment register for professional traineeship participants and the control group, for three lengths of previous EO registration. Source: data 'OKPráce' (2014-2017). Notes: T - professional traineeship participants; C - control group; SH - pre-control event registration 0-90 days; ME - pre-control event registration 91-365 days; LO - pre-control event registration 366+ days. Table 3 shows the main reasons for leaving the Employment Office register during the program. A total of 77.1% of the program participants left the Employment Office register to go to a subsidised workplace and 17.4% on average found work without a subsidy. The regions varied in the share of professional traineeship participants who found a job without a subsidy (over 30% in the Liberecký, Plzeňský and Jihomoravský regions). Regional Employment Offices used different placement strategies; for example, in the Plzeňský region, right from the start, 133 people were employed outside the key activity subsidised workplace (Employment Office, 2014e). A little more than 50% of the young people in the control group found a job without a subsidy. About 10% of the people in the control group entered a subsidised workplace in the following years (2015-2017); 14.9% were expelled from the EO register; 13.8% left for unknown reasons; and 7.9% left for personal reasons. We used competing risk analysis (the cumulative incidence function) to see how the hazards of leaving the Employment Office register developed over time. Graph 2 (for the first measured Employment Office registration) shows that exits from the Employment Office register quickly cumulate during the first year after the start of the program. While 94.5% of program participants went from the first Employment Office registration to a job or subsidised employment, in the control group it was merely 61.7% who were certain to leave for a job. This can mean that even young people in the control group (e.g., within the Czech context, those with a very good level of education) may be subject to various 'yo-yo' transitions (see Walther, 2006 for an explanation of the term). When considering the reasons for leaving the Employment Office register, the results of those involved in the professional traineeships seem to be more promising than those based only on Employment Office registration. The results in Table 4 show that during the 750 days, over 95% of young people left the Employment Office register in both the participant and control groups. In addition, there were very similar shares of people from among the professional traineeship participants (25.8%) and in the control group (23.3%) who returned to the Employment Office register. There were higher rates of program participants returning to the Employment Office register in the initial phase of the program (when possibly there was some problem from the start of the participation at the subsidised workplace) and in the 12th and 13th months of the program (probably immediately or soon after the financial subsidy ended). Aggregate data provided by the Employment Offices of some regions confirmed that about 15% of the participants returned to the Employment Office register immediately after the end of the subsidy. The share of young people who stayed with the same employer after the end of the subsidy varied greatly from region to region: between one-third and three-quarters (see Employment Office 2015c, 2015d, 2016d, 2016i).
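The cumulative incidence curves discussed here were estimated with the R 'cmprsk' package. As a hedged illustration of the underlying estimator, the sketch below computes a simple non-parametric (Aalen-Johansen type) cumulative incidence function for one competing exit route; the event coding and example durations are invented for demonstration.

```python
import numpy as np

def cumulative_incidence(durations, events, event_of_interest=1):
    """Aalen-Johansen style CIF for one competing event (event code 0 = censored)."""
    durations = np.asarray(durations, dtype=float)
    events = np.asarray(events)
    times = np.unique(durations[events != 0])

    surv, cif, out_t, out_cif = 1.0, 0.0, [0.0], [0.0]
    for t in times:
        at_risk = np.sum(durations >= t)
        d_any = np.sum((durations == t) & (events != 0))
        d_k = np.sum((durations == t) & (events == event_of_interest))
        cif += surv * d_k / at_risk          # increment with the cause-specific hazard
        surv *= 1.0 - d_any / at_risk        # overall "still registered" probability
        out_t.append(t); out_cif.append(cif)
    return np.array(out_t), np.array(out_cif)

# Hypothetical example: days until leaving the EO register and the exit cause.
durations = [45, 90, 90, 200, 360, 400, 750, 750]
events    = [1,  1,  2,  0,   1,   2,   0,   1]   # 0 censored, 1 job, 2 other reasons
t, cif_job = cumulative_incidence(durations, events, event_of_interest=1)
```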
We wondered what happened to the young people after they first returned to the Employment Office registration after the start of the project (pre-control event) -these are the people in the even-numbered columns of Table 4. The results for the second Employment Office registration (see Table 3) showed that 63% of program participants left for unsubsidized job, compared to 53.8% in the control group. There are other reasons that are more prevalent in the control group, such as expulsion and unknown reasons for leaving the Employment Office register. We also show this result in Graph 3. The program participants were more often able to leave the Employment Office register for unsubsidized employment. The placement of program participants in subsidized workplaces during the second unemployment spell (about 10% of cases) included both repeated placements (when the first placement was not successful), which occurred more often, and less often, placements of people who had previously found jobs without a subsidy, but relatively quickly returned to the Employment Office register. Graph 3: Competing risks analysis of reasons for leaving second employment office registration. Notes: see Graph 2 DISCUSSION Results of our study are consistent with the results of studies presented in the review of the previous findings above. The study showed substantial differences between the effects measured with two different outcomes: a) 'Employment Office registration' and b) 'reasons for leaving Employment Office registration'. When taking Employment Office registration as the main indicator, it seems that the impact of the program was very small (except for long-term unemployment). It is well-known that 'deadweight effects' can plague the impact of private employment subsidies. Subsidies are not always needed for people with good a priori job prospects because many of them would quickly enter the labour market even without assistance (OECD, 2005). This can be accelerated by good economic conditions (as was the case in the Czech Republic in 2015 and 2016), which may increase the interest of employers in employing young people even without a subsidy. This is why ALMP programs tend to have better effects in economic recessions (Card et al. 2015;Forslund, Fredriksson, & Vikström, 2011). Nevertheless, when we take the outcome 'reasons for leaving Employment Office registration' into account, the effects of the professional traineeship measure on the employment of young people seem to be more positive. The main potential gain of the measure can be in avoiding yo-yo transitions at the beginning of the work career. Our assumptions about the effects are based on micro-economic evaluation. We cannot evaluate the macro-economic impacts of professional traineeships. Nevertheless, we can be somewhat optimistic here, because macro-economic evaluations of ALMPs often show positive effects on reducing unemployment (Martin, 2015). Furthermore, we cannot prove or falsify the complete nonexistence of spill-over and social interaction effects (see OECD, 2005). However, the implementation of traineeships due to low scope of program probably did not harm the work chances of the young people in the control group. CONCLUSION In this article, we have assessed one of the important measures of Youth Guarantee in the Czech Republic: 'Professional traineeships for young people up to 30 years'. 
The aim of the measure, which was part of the Youth Guarantee, was to provide a 'good quality offer' relatively quickly after Employment Office registration and to provide work experience to young people. The measure was able to activate and motivate young people to work, even at the cost of some self-selection of participants into the program. Most of the young people who participated in professional traineeships were able to gain a foothold in the labour market, gained work experience and escaped long-term unemployment at the start of their careers. The avoidance of long-term unemployment spells at the start is very important, because such spells can be especially detrimental for the prospects of young people (Caliendo & Schmidl, 2016). Although the Employment Office registration seems to be similar at first glance, it is obvious that the pathways of the young people in the control group were different (including more cases of economic inactivity and unknown fates). Targeting young long-term unemployed people, or those who are in some other way disadvantaged, may further improve the impacts of the program. Nevertheless, there is a substantial risk that the program would not be able to overcome all the potential problems of multi-disadvantaged participants. Further research on professional traineeships could include measuring long-term impacts and regional differences in impacts, measuring impacts with better data from social security registers, and providing insight into the macro-economic effects of the program.
Unreduced Megagametophyte Production in Lemon Occurs via Three Meiotic Mechanisms, Predominantly Second-Division Restitution Unreduced (2n) gametes have played a pivotal role in polyploid plant evolution and are useful for sexual polyploid breeding in various species, particularly for developing new seedless citrus varieties. The underlying mechanisms of 2n gamete formation were recently revealed for Citrus reticulata but remain poorly understood for other citrus species, including lemon (C. limon [L.] Burm. f.). Here, we investigated the frequency and causal meiotic mechanisms of 2n megagametophyte production in lemon. We genotyped 48progeny plants of two lemon genotypes, “Eureka Frost” and “Fino”, using 16 Simple Sequence Repeat (SSR) and 18 Single Nucleotide Polymorphism (SNP) markers to determine the genetic origin of the progenies and the underlying mechanisms for 2n gamete formation. We utilized a maximum-likelihood method based on parental heterozygosity restitution (PHR) of centromeric markers and analysis of PHR patterns along the chromosome. The frequency of 2n gamete production was 4.9% for “Eureka Frost” and 8.3% for “Fino”, with three meiotic mechanisms leading to 2n gamete formation. We performed the maximum-likelihood method at the individual level via centromeric marker analysis, finding that 88% of the hybrids arose from second-division restitution (SDR), 7% from first-division restitution (FDR) or pre-meiotic doubling (PRD), and 5% from post-meiotic genome doubling (PMD). The pattern of PHR along LG1 confirmed that SDR is the main mechanism for 2n gamete production. Recombination analysis between markers in this LG revealed partial chiasma interference on both arms. We discuss the implications of these restitution mechanisms for citrus breeding and lemon genetics. INTRODUCTION The exact area of origin of lemon (Citrus limon [L.] Burm. f.) is uncertain, but this crop likely originated in Northern India and South East China or in northern Myanmar (Curk et al., 2016). Molecular analyses indicate that this species resulted from direct hybridization between C. aurantium L. (sour orange) as the female parent and C. medica L. (citron) as the male parent (Nicolosi et al., 2000;Froelicher et al., 2011;García-Lor et al., 2013;Curk et al., 2016). The Mediterranean Basin is a major area of lemon production, accounting for 48% of production worldwide (Duportal et al., 2013). Turkey is the most important lemon-producing country in this area (annual production >1,000,000 tons), followed by Spain (900,000 tons) and Italy (500,000 tons) (Martín and González, 2014). Seedless lemons with high organoleptical qualities and resistance to important diseases, such as Mal secco caused by Phoma tracheiphila, are in high demand by consumers and growers (Uzun et al., 2008;Migheli et al., 2009;Pérez-Tornero et al., 2012). Several lemon-breeding programs worldwide are focused on meeting this demand (Calabrese et al., 2000;Recupero et al., 2005;Spiegel-Roy et al., 2007;Uzun et al., 2008;Pérez-Tornero et al., 2012), despite the difficulties imposed by the high heterozygosity and low genetic variation of this species (Krueger and Navarro, 2007). The frequency of 2n female gametes, an intrinsic characteristic of citrus genotypes, can vary from <1% to over 20% (Esen and Soost, 1971;Ollitrault et al., 2008). For C. limon, 1 and 5% of triploid progenies were recovered from 2x X 2x sexual hybridizations using "Lisbon" and "Eureka" lemons as the female parents, respectively Geraci et al., 1975). 
Moreover, Pérez-Tornero et al. (2012) obtained 5.8 to 8.6% of triploid hybrids from a 2x X 2x cross between "Verna" and "Fino" genotypes. Various meiotic aberrations can result in unreduced gamete formation. First-division restitution (FDR) and seconddivision restitution (SDR) are the predominant mechanisms of 2n gamete formation in plants (De Storme and Geelen, 2013). These gametes are produced as a consequence of the failure of the first or second meiotic division, respectively, leading to the formation of restitution nuclei with a somatic chromosome number (Mendiburu and Peloquin, 1976;Park et al., 2007). As a result, FDR and SDR have different genetic implications. FDR 2n gametes contain non-sister chromatids, which in the absence of crossover maintain the parental heterozygosity. When crossing over occurs, the parental heterozygosity restitution (PHR) rates vary from 100% for loci close to the centromere to 60-70% for loci far from the centromere, depending on the level of chromosome interference (Cuenca et al., 2011). For SDR, the 2n gametes contain two sister chromatids, which reduces the parental heterozygosity level (Bastiaanssen et al., 1998;Cuenca et al., 2011;De Storme and Geelen, 2013). When crossing over occurs, the PHR rate varies from 0% for loci close to the centromere to 60-75% for loci far from the centromere, depending on the level of chromosome interference (Cuenca et al., 2011). SDR is the dominant mechanism involved in the origin of unreduced female gametes in clementines and mandarins (Luro et al., 2004;Cuenca et al., 2011Cuenca et al., , 2015Aleza et al., 2016). Ferrante et al. (2010) reported that FDR is the main mechanism for unreduced female gamete formation in lemon. However, their results were based on the analysis of only a few individuals with few markers and without previous knowledge of centromere location. Other mechanisms leading to unreduced gamete formation have been described, such as pre-meiotic (PRD) and post-meiotic genome doubling (PMD). Although, PMD was identified in potato (Bastiaanssen et al., 1998), both mechanisms have only rarely been documented in plants (De Storme and Geelen, 2013). PRD produces 2n gametes equivalent to the meiosis of doubled diploid genotypes. Therefore, PHR depends mainly on the chromosomal preferential pairing rate (Stift et al., 2008), which should vary between 66% for fully tetrasomic meiosis to 100% for fully disomic meiosis. Little variation can occur along the chromosome due to double reduction events. In the case of PMD, haploid gametes undergo an extra round of genome duplication, leading to the formation of fully homozygous 2n gametes (Bastiaanssen et al., 1998;Ramanna and Jacobsen, 2003;De Storme and Geelen, 2013;Cuenca et al., 2015). Thus, 100% homozygosity for all loci is expected among the 2n gametes (Ramanna and Jacobsen, 2003). SDR can also produce 100% homozygosity for centromeric markers, but not for telomeric ones (Cuenca et al., 2011). Therefore, in order to distinguish between both mechanisms, Cuenca et al. (2015) genotyped telomeric loci to determine whether diploid gametes fully homozygous for centromeric markers resulted from PMD or SDR. Moreover, Bastiaanssen et al. (1998) identified 2n female gametes of potatoes fully homozygous for RFLP markers. The evidence for recombination between alleles originating from the two ancestors of the parent producing 2n gametes indicated that these gametes originated from PMD. 
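To make the FDR/SDR contrast concrete, the following sketch works through a deliberately simplified model: a single potential crossover between the centromere and one heterozygous locus, occurring with probability p, with one randomly chosen chromatid from each homolog involved. Under these assumptions an SDR gamete (two sister chromatids) is heterozygous only if a crossover occurred (PHR ≈ p), while an FDR gamete (two non-sister chromatids) stays heterozygous unless the crossover scrambles the chosen chromatids (PHR ≈ 1 − p/2). This reproduces the qualitative pattern described above (FDR near 100% at the centromere, SDR near 0%), but it is an illustrative model, not the method used in the studies cited.

```python
# Illustrative single-crossover model of 2n-gamete heterozygosity restitution (PHR).
# Assumption (simplification): at most one crossover occurs between the centromere and
# the locus, with probability p, involving one random chromatid from each homolog.
import numpy as np

def simulate_phr(p, mechanism, n=100_000, seed=1):
    rng = np.random.default_rng(seed)
    # Chromatids: homolog 1 sisters carry allele 'A', homolog 2 sisters carry 'a'.
    alleles = np.tile(np.array([["A", "A", "a", "a"]]), (n, 1))
    crossover = rng.random(n) < p
    c1 = rng.integers(0, 2, n)          # which sister of homolog 1 recombines
    c2 = rng.integers(0, 2, n) + 2      # which sister of homolog 2 recombines
    rows = np.flatnonzero(crossover)
    # Exchange the distal segments (the locus lies distal to the crossover point).
    alleles[rows, c1[rows]], alleles[rows, c2[rows]] = (
        alleles[rows, c2[rows]].copy(), alleles[rows, c1[rows]].copy())
    if mechanism == "SDR":              # gamete = two sister chromatids
        pick = np.stack([np.zeros(n, int), np.ones(n, int)], axis=1)
    else:                               # FDR: one chromatid from each homolog
        pick = np.stack([rng.integers(0, 2, n), rng.integers(0, 2, n) + 2], axis=1)
    g = np.take_along_axis(alleles, pick, axis=1)
    return np.mean(g[:, 0] != g[:, 1])  # fraction of heterozygous gametes (PHR)

for p in (0.0, 0.4, 0.8):
    print(p, round(simulate_phr(p, "SDR"), 3), round(simulate_phr(p, "FDR"), 3))
# Expected under this toy model: PHR_SDR ~ p and PHR_FDR ~ 1 - p/2.
```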
Molecular marker analyses can be used to estimate the PHR rates for diploid gametes in polyploid progenies and, therefore, to identify the mechanisms underlying unreduced gamete formation (Cuenca et al., 2011). Cuenca et al. (2015) took advantage of known citrus centromere locations to develop a maximum-likelihood method that distinguishes between SDR and FDR mechanisms at both the population and individual levels based on the PHR patterns of unlinked markers located close to the centromeres of different chromosomes. In the current study, we analyzed the frequencies of 2n gamete formation and the causal meiotic mechanisms leading to 2n gamete formation in two varieties of lemon, "Eureka Frost" and "Fino", through genetic analyses of triploid and tetraploid hybrids recovered from 2x X 2x and 2x X 4x sexual hybridizations. We used the maximum-likelihood method based on centromeric molecular markers in conjunction with a telomeric loci study and analysis of the pattern of PHR variation along linkage group 1 (LG1) to identify the mechanisms underlying unreduced gamete formation at the individual and population level. Crossover interference was also analyzed. We discuss the implications for breeding programs based on sexual polyploidization. Plant Material Triploid and tetraploid citrus hybrids were obtained via 2x X 2x and 2x X 4x sexual hybridizations using diploid "Eureka Frost" and "Fino" lemon genotypes as female parents pollinated with diploid "Fortune" mandarin (C. clementina x C. tangerina) and C. ichangensis Swing and tetraploid C. macrophylla Wester. Flowers in pre-anthesis were emasculated, pollinated, and enclosed with a cloth bag. A total of 115 "Eureka Frost" lemon flowers were pollinated, including 55 with "Fortune" mandarin (named EuFor) and 60 with C. ichangensis (named EuIch), while 15 "Fino" lemon flowers were pollinated with tetraploid C. macrophylla (named FinMac). The detailed methods used for plant recovery via in vitro embryo rescue and ploidy level analysis via flow cytometry can be found in Aleza et al. (2010a;2010b;. Genotyping of Progenies Using Simple Sequence Repeat (SSR) and Single Nucleotide Polymorphism (SNP) Markers The male and female parents and 48 hybrids were genotyped using 34 molecular markers (16 Simple Sequence Repeats [SSRs] and 18 Single Nucleotide Polymorphisms [SNPs]) showing heterozygosity for the lemon genotypes and polymorphism with the male parents. These markers are distributed across all LGs of the clementine genetic map . Detailed information about the markers is provided in Table 1. Genomic DNA was isolated using a Plant DNeasy kit from Qiagen Inc. (Valencia, CA, USA) following the manufacturer's protocol. PCR amplifications using 16 SSR markers were performed using a Thermocycler rep gradient S (Eppendorf R ) in a 10 µL final volume containing 0.8 U of Taq DNA polymerase (Fermentas R ), 2 ng/µL citrus DNA, 0.2 mM welled (Sigma R ) dye-labeled forward primer, 0.2 mM non-dye-labeled reverse primer, 0.2 mM of each dNTP, 10 × PCR buffer, and 1.5 mM MgCl 2 . The PCR protocol was as follows: denaturation at 94 • C for 5 min followed by 40 cycles of 30 s at 94 • C, 1 min at 50 or 55 • C, and 45 s at 72 • C; and a final elongation step of 4 min at 72 • C. Capillary electrophoresis was carried out using a CEN TM 8,000 Genetic Analysis System (Beckman Coulter Inc.). The PCR products were initially denatured at 90 • C for 2 min, injected at 2 kV for 30 s, and separated at 6 kV for 35 min. Alleles were sized based on a DNA size standard (400 bp). 
Genome Lab TM Gap v.10.0 genetic analysis software was used for data collection. Allele dosage was calculated using the MAC-PR (microsatellite DNA allele counting-peak ratio) method (Esselink et al., 2004), validated in citrus by Cuenca et al. (2011). Triploid and tetraploid hybrids were also genotyped with 18 SNP markers using KASPar TM technology by LGC Genomics (www.lgcgenomics.com). The KASPar Genotyping System is a competitive, allele-specific dual Förster Resonance Energy Transfer (FRET)-based assay for SNP genotyping. Primers were directly designed by LGC Genomics Company based on the SNP locus-flanking sequence (∼50 nt on each side of the SNP). SNP genotyping was performed using the KASPar technique. A detailed description of specific conditions and reagents can be found in Cuppen (2007). Identification of allele doses in heterozygous triploid and tetraploid hybrids was carried out based on the relative allele signals, as described by Cuenca et al. (2013a) and Aleza et al. (2015). Identification of the Parent Producing the Unreduced Gamete and Inference of the Unreduced Gamete Genotype For triploid and tetraploid hybrids, the 2n gamete origin was determined by identifying the parent that passed double genetic information onto the hybrid. Markers with total differentiation between the parents (A 1 A 1 x A 2 A 2 A 2 A 2 , A 1 A 2 x A 3 A 3 A 3 A 3 , and A 1 A 2 x A 3 A 3 A 4 A 4 in 2x X 4x crosses) for tetraploids and (A 1 A 1 x A 2 A 2 , A 1 A 2 x A 3 A 3 , and A 1 A 2 x A 3 A 4 in 2x X 2x crosses) for triploids were the best allelic configurations, as described by Aleza et al. (2015) and Cuenca et al. (2015). Indeed, conclusive results can be obtained using only one marker, as was the case for FinMac hybridization using the JK-TAA41 SSR marker. However, for EuFor and EuIch hybridizations, more than one marker had to be analyzed to observe both alleles from the female parent at least once for each hybrid. The SSRs JK-TAA1, JK-TAA41, and MEST131 were used for EuFor hybridization, and JK-TAA1, JK-TAA41, MEST001, and Ci02B07 were used for EuIch. Once the female origin of the diploid gamete was demonstrated, inference of the allelic configurations of the 2n gametes from hybrid genotyping was performed as described by Cuenca et al. (2011). In the case of FinMac tetraploid hybridization, for the A 1 A 2 × A 3 A 3 A 3 A 3 and A 1 A 2 × A 3 A 3 A 4 A 4 allelic configurations, the genotype of the unreduced gamete was deduced directly from observation of both A 1 and A 2 alleles in the tetraploid hybrids. However, when the male and female parents shared one allele (A 1 A 2 × A 1 A 1 A 1 A 1 and "Fino" C. macrophylla Frontiers in Plant Science | www.frontiersin.org A 1 A 2 × A 1 A 1 A 3 A 3 ), for the tetraploid hybrids that inherited the common allele (A 1 ), inference of the unreduced female gamete structure was carried out based on the estimated allele dosage in the tetraploid hybrid. C. ichangensis In the case of triploid hybrids obtained from EuFor and EuIch hybridizations, for A 1 A 2 x A 3 A 3 and A 1 A 2 x A 3 A 4 , the genotype of the 2n gamete was deducted directly from the triploid hybrid structure. When the male and female genitors shared one allele (A 1 A 2 x A 2 A 2 and A 1 A 2 x A 2 A 3 ), the 2n female gamete structure for the triploid hybrids with a common allele from the male genitor was inferred from the estimated allele dosage in the triploid hybrid. 
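For illustration only, the next sketch shows the general idea behind peak-ratio (MAC-PR-style) dosage calling: the observed allele peak-area ratio is corrected with a ratio calibrated on a heterozygous diploid and compared against the ratios expected for each integer dose combination. The numbers and the function are hypothetical, not the values or software used in this study.

```python
# Sketch of MAC-PR-style allele dosage calling from two peak areas.
# `reference_ratio` is the area ratio (allele1/allele2) observed in a diploid
# heterozygote (1:1 dose), used to correct for amplification bias.
from itertools import product

def call_dosage(area1, area2, ploidy=3, reference_ratio=1.0):
    observed = (area1 / area2) / reference_ratio       # bias-corrected ratio
    best, best_err = None, float("inf")
    for d1, d2 in product(range(1, ploidy), repeat=2):
        if d1 + d2 != ploidy:
            continue                                    # doses must sum to the ploidy
        err = abs(observed - d1 / d2)
        if err < best_err:
            best, best_err = (d1, d2), err
    return best                                         # e.g. (2, 1) for A1A1A2

# Hypothetical triploid peaks: allele 1 roughly twice the area of allele 2.
print(call_dosage(area1=10500, area2=5200, ploidy=3, reference_ratio=1.05))
```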
To distinguish between the SDR and FDR hypotheses, the maximum-likelihood method based on the LOD score test described by Cuenca et al. (2015) was employed. LODs >2 were considered to be significant for SDR, those < −2 were considered to be significant for FDR, and those between 2 and −2 were considered not to be significant. To compare the SDR hypothesis with the PRD hypothesis using LOD scores, we considered the minimum value of 66% of PHR as the theoretical value for the PRD hypothesis. Interference Analysis Taking into account the centromere position, three-point linkage mapping was performed to estimate chiasma interference for each chromosome arm of chromosome I. The centromere was used as the first point, and two markers were selected on each arm (MEST001 and MEST431 on one arm and mCrCIR06B05 and CIBE6126 on the other arm). The chromosome interference coefficient (IC) is defined as follows (as per Griffiths et al., 1996): Where r CM1 indicates the observed recombination rate (heterozygous to homozygous and vice versa) between the centromere and locus 1; r M1M2 , the observed recombination between locus1 and 2; and rd, the observed rate of double recombination between the centromere and locus 2. Parental Origin of Recovered Plants and Frequencies of Unreduced Gametes For sexual hybridizations between "Eureka Frost" lemon as the female parent and "Fortune" mandarin and C. ichangensis as the male parents, the average fruit set was 45.5 and 36.7%, respectively ( Table 2), yielding 250 and 464 seeds, respectively, from both hybridizations. We classified the seeds by size, since, according to Aleza et al. (2010a), seed size is highly correlated to ploidy level. While small seeds are expected to contain triploid embryos, tetraploids are generally observed in normal size seeds. Thus, we selected 45 and 40 small seeds from the EuFor and EuIch hybridizations, respectively, for plant regeneration by embryo rescue. From the 45 small seeds obtained in the EuFor hybridization, 54 embryos were cultured in vitro, with an average of 1.2 embryos per seed, indicating a low rate of polyembryony in "Eureka" lemon. Of the 53 plantlets recovered, 32 were diploid and 21 triploid. All 40 small seeds recovered from the EuIch hybridization contained only a single embryo. Of the 35 plants regenerated, 21 were diploid and 14 were triploid. For the FinMac 2x X 4x sexual hybridization, the average fruit set was 53.3%, and 36 normal seeds were obtained according to the size classification of Aleza et al. (2012b). Of the 36 plants recovered, 23 were triploid and 13 were tetraploid ( Table 2). To determine which parent passed double genetic information onto the hybrids, we genotyped triploid hybrids recovered from the 2x X 2x hybridizations using markers that displayed total allelic differentiation between "Eureka Frost" lemon and the male parents, "Fortune" mandarin and C. ichangensis (Figure 1): SSR markers JK-TAA1, JK-TAA41, and MEST131 for the EuFor hybridization and SSR markers JK-TAA1, JK-TAA41, MEST001, and Ci02B07 for the EuIch hybridization. Genetic analysis enabled us to unequivocally identify the hybrid origins of all triploid plants, except for one plant from the EuFor sexual hybridization and four from the EuIch sexual hybridization, which were rejected since they could have originated from autopollination of the female parents. Genetic analysis showed that "Eureka Frost" lemon produced the 2n gametes for all triploid hybrids, as shown in Figure 1. 
For the FinMac hybridization, the recovered tetraploid hybrids were genotyped with the JK-TAA41 SSR marker, which displays total allelic differentiation between "Fino" lemon and tetraploid C. macrophylla, allowing us to conclude that all plants were hybrids and that "Fino" lemon produced the 2n gametes (Figure 1). Analysis of the genetic origins of the 23 triploid plants recovered from this 2x X 4x hybridization showed that, as expected, they were obtained from the union of a normal reduced haploid female gamete and a normal reduced diploid pollen gamete, as previously observed in other citrus species (Aleza et al., 2012a). The frequency of 2n gametes has been shown to be genotype-dependent in citrus and in other herbaceous and woody plants such as Brassica, potato, and peach (Dermen, 1938; Mok et al., 1975; Ollitrault et al., 2008; Aleza et al., 2010a; Mason et al., 2011; Younis et al., 2014). This hypothesis is supported by the genetic improvement of unreduced gamete rates for Trifolium (frequencies increased from 0.04 to 47%) and Medicago sativa (from 9 to 78%) in only three generations of recurrent selection (Gallais, 2003). In the current study, we observed a rate of 4.9% 2n gametes in the 2x X 2x hybridizations (EuFor and EuIch), whereas, in the 2x X 4x hybridization (FinMac), the percentage was higher (8.3%). These differences might be due to a genotypic effect of the parents, but are more likely due to the modification of the embryo/endosperm ploidy level ratio in interploid hybridizations. Esen and Soost (1971) reported that, in diploid plants, when an unreduced gamete is pollinated with normal reduced pollen, the embryo/endosperm ploidy level ratio (3/5) is less favorable for embryo development than that for normal diploid embryos (2/3), whereas the pollination of a 2n female gamete with diploid pollen in 2x X 4x sexual hybridizations provides the correct embryo/endosperm ploidy level ratio (4/6 = 2/3), leading to normal seed development. Therefore, 2x X 4x hybridization appears to be a more favorable situation for revealing unreduced gametes via the development of tetraploid embryos in normal seeds. Mechanism of Unreduced Gamete Formation To determine the mechanism leading to unreduced gamete formation, we used nine unlinked molecular markers localized in the nine LGs for EuFor and EuIch and seven markers in seven different LGs for FinMac to perform a LOD score test for the SDR/FDR and SDR/PRD probability ratios for all genotypes analyzed (Tables 3, 4, 5). The analysis of six markers covering LG1 and additional telomeric loci allowed us to distinguish between SDR and PMD when the inferred gametes were totally homozygous for the centromeric loci. LOD Score Analysis For the EuFor hybridization, 20 triploid hybrids were genotyped using nine centromeric loci found in all LGs. Ten of the inferred 2n gametes were totally homozygous for these markers. However, all displayed at least one heterozygous marker when six markers covering LG1 were analyzed, allowing the PMD hypothesis to be rejected for all inferred 2n gametes. For the SDR/FDR hypothesis test at the individual level, 19 inferred 2n gametes displayed LOD values >2 (ranging from 12.05 to 15.22; Table 3). For the same 19 gametes, the LOD values for SDR/PRD were also >2. Therefore, these 19 plants were considered to have originated from SDR. One plant obtained negative LODs of −4.52 and −6.86 for the SDR/FDR and SDR/PRD hypotheses, respectively, suggesting that this plant is of FDR or PRD origin. At the population level, the LOD values were 267.82 and 57.03 for the SDR/FDR and SDR/PRD hypotheses, respectively, revealing a high rate of SDR.
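The LOD scores reported here come from the maximum-likelihood framework of Cuenca et al. (2015). The sketch below reproduces only the generic likelihood-ratio logic of such a test on centromeric loci; the expected centromeric PHR values it assumes (0.02 under SDR, 0.95 under FDR, 0.80 under PRD) are illustrative placeholders, not the calibrated probabilities of the published method.

```python
# Sketch of a per-individual LOD test between two 2n-gamete formation hypotheses,
# following the general logic of a likelihood-ratio test on centromeric loci.
# The expected centromeric PHR values below are illustrative assumptions only.
import math

EXPECTED_PHR = {"SDR": 0.02, "FDR": 0.95, "PRD": 0.80}

def lod(observed_het, hyp1="SDR", hyp2="FDR"):
    """observed_het: list of booleans, one per centromeric marker (True = heterozygous)."""
    score = 0.0
    for het in observed_het:
        p1 = EXPECTED_PHR[hyp1] if het else 1 - EXPECTED_PHR[hyp1]
        p2 = EXPECTED_PHR[hyp2] if het else 1 - EXPECTED_PHR[hyp2]
        score += math.log10(p1 / p2)
    return score   # LOD > 2 favours hyp1, LOD < -2 favours hyp2

# A gamete homozygous at 8 of 9 centromeric markers strongly favours SDR over FDR.
obs = [False] * 8 + [True]
print(round(lod(obs, "SDR", "FDR"), 2), round(lod(obs, "SDR", "PRD"), 2))
```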
For the EuIch hybridization, 10 triploid hybrids were genotyped with nine centromeric markers located on all LGs. Two inferred 2n gametes were totally homozygous for these markers, but at least one heterozygous locus was observed for each 2n gamete in the complementary analysis of PHR along LG1, thus discarding the PMD hypothesis. At the individual level, eight plants displayed LOD values >2 for SDR/FDR (from 8.69 to 14.53), rejecting the FDR hypothesis (Table 4). Among them, seven displayed a LOD >2 for SDR/PRD (ranging from 2.13 to 3.86) and were considered to have arisen from SDR. The LOD value for the remaining 2n gamete was 0.55, suggesting that this 2n gamete had arisen from SDR rather than PRD, but, since this value is below our threshold, this result is not conclusive. Two plants produced negative LOD values (< −2) in both the SDR/FDR and SDR/PRD tests, suggesting that they originated by FDR or PRD. The population LODs were 80.21 and 2.77 for SDR/FDR and SDR/PRD respectively, confirming the predominance of the SDR mechanism. For FinMac, 13 tetraploid hybrids were genotyped with seven centromeric markers (LGs 1, 2, 4, 6, 7, 8, and 9). Six inferred 2n gametes were totally homozygous for these markers (Table 5). Among these, two unreduced gametes (from FinMac 12 and FinMac 13) remained totally homozygous after analyzing six markers covering LG1 and were subjected to additional analysis to distinguish between the SDR and PMD hypotheses. The 11 2n gametes with at least one heterozygous locus produced LOD values >2 for SDR/FDR, rejecting the FDR hypothesis. Among these, four displayed LOD values of 2.81 for the SDR/PRD test and were therefore considered to have arisen from SDR. The seven remaining 2n gametes displayed positive values ranging from 0.52 to 1.91. These gametes had a higher probability of arising from SDR than from PRD, but this result is not conclusive because the values are below our threshold. The population LOD values were 78.84 and 19.81 for SDR/FDR and SDR/PRD, respectively, again confirming the prevalence of SDR. The seven 2n gametes with inconclusive individual LODs displayed population LODs of 43.12 and 8.56 for SDR/FDR and SDR/PRD, respectively. It is therefore highly probable that they also arose from SDR. Pattern of Heterozygosity Restitution along LG1 for 2n Gametes with an Identified SDR Origin and Undetermined SDR/PRD Origin To validate, at the population level, the finding that 38 2n gametes were derived by SDR (as determined by individual LOD analysis) and to distinguish between SDR and PRD for the eight gametes with inconclusive individual LODs, we compared the PHR patterns of the two sets of gametes in LG1. For this analysis, we used four SSR markers (CIBE6126, mCrCIR06B05, MEST001, and MEST431) and two SNP markers (CiC2110-02 and CiC5950-02) (Figure 2) mapped in LG1 (Figures 3, 4). For the conclusive SDR 2n gametes, the PHR values in LG1 (Figure 3) decreased from 67% for the telomeric marker CIBE6126 to 3% for the centromeric marker mCrCIR06B05 and progressively increased to 77% when moving toward the other telomeric marker, MEST431. The average PHR value was 42%.
For the eight inconclusive 2n gametes, the same PHR pattern was observed: the lowest value was obtained for the centromeric marker mCrCIR06B05 (0%) and the highest for the telomeric markers (63% for CiC2110-02 in one telomere and 75% for MEST431 in the other). The average PHR for these eight gametes was 46% (Figure 3). These PHR patterns totally fit the profile for SDR. The average PHR value over the two sets of 2n gametes was 43%. Various studies have indicated that the global restitution of heterozygosity is expected to be near 80% for FDR and 40% for SDR, assuming a random distribution of heterozygous loci along the chromosomes (Peloquin, 1983;Hutten et al., 1994;Carputo et al., 2003). Both the patterns along LG1 and the average PHR values comply with the SDR hypothesis. Therefore, we conclude that the eight 2n gametes of indeterminate origin identified from the individual LOD (SDR/PRD) analysis also originated from SDR. Under this conclusion, the PHR pattern in LG 1 is very similar for "Eureka Frost" and "Fino" lemon SDR 2n gamete populations (Figure 4). Distinction between SDR and PMD for Fully Homozygous 2n Gametes We performed additional analyses of the two inferred 2n gametes (FinMac 12 and FinMac 13 tetraploid plants) fully homozygous for the seven centromeric markers and the six markers of LG1. Fully homozygous 2n female gametes for centromeric loci can originate through SDR or PMD, with different consequences for the genetic structures of 2n gametes. Bastiaanssen et al. (1998) defined two conditions that are necessary to conclude that PMD rather than SDR has occurred, i.e., 100% homozygosity for all genotyped loci and the occurrence of recombination between homozygous alleles in the same LG. Therefore, we genotyped FinMac 12 and FinMac 13 using 11 telomeric loci found in different LGs to provide genetic evidence for a particular PMD mechanism. The average distance from these markers to their corresponding centromere is 53.22 cM (ranging from 25.32 to 89.59 cM). Both plants were homozygous for all molecular markers analyzed. Furthermore, C. limon is a direct hybrid between two genetically distant genotypes, C. aurantium and C. medica (Nicolosi et al., 2000;Curk et al., 2016), and the specific origins of the homozygous alleles can easily be distinguished. We found that some homozygous markers of the same LG were inherited from the C. aurantium ancestor and the others from C. medica. For example, multilocus analyses of the homozygous alleles in LG1 (Figure 5) revealed interspecific recombination in the two plants with alternation of homozygosity originated from C. aurantium and C. medica. Consequently, according to Bastiaanssen et al. (1998), the observation of 100% homozygosity and recombination between C. aurantium and C. medica along the same LG provides evidence discarding the SDR mechanism and leads us to conclude that these two 2n gametes originated through PMD. To our knowledge, this is the first report of the identification of a new mechanism, Post-Meiotic genome Doubling, leading to 2n ovule gametes in citrus, and specifically in lemon. FIGURE 4 | Evolution of maternal heterozygosity restitution values of the analyzed SSR and SNP markers in LG 1 considering both populations, "Eureka Frost" and "Fino" lemon SDR-2n gametes. Black dots indicate the centromere position on the reference clementine genetic map . FIGURE 5 | Multilocus configuration of the two fully homozygous plants recovered from FinMac hybridization with six molecular markers located on LG 1. 
Yellow indicates the presence of homozygous alleles inherited from C. aurantium, and green indicates those from C. medica. Synthesis of Different Approaches On the whole, we conclude that 38 (88%) of the 2n gametes analyzed had arisen from SDR, three (7%) from FDR or PRD, and two (5%) from PMD. At the population level, SDR appears to be by far the most common mechanism for 2n ovule formation in both C. limon genotypes, "Eureka Frost" and "Fino". This is the first report of the production of a large number of lemon progenies from 2n gametes produced by different mechanisms of unreduced ovule gametes. Luro et al. (2004), Aleza et al. (2015), and Cuenca et al. (2015) also found that SDR was the predominant mechanism leading to 2n megagametophyte production in mandarins. Among the 19 mandarins investigated, the authors concluded that only 1.1 and 2.9% of plants were recovered from FDR in the "Ellendale" and "Fortune" mandarins, respectively. The coexistence of SDR and FDR has been recently observed in unreduced pollen gametes by Rouiss et al. (2017). 53 plants were obtained from 2n pollen gametes produced by a diploid hybrid between clementine and sweet orange. FDR was the predominant mechanism (77%) and SDR was the mechanism for the remaining plants (23%). In addition, FDR was the main mechanism for 2n female gamete production in "Femminello" lemon (Ferrante et al., 2010). These results are questionable because the authors used only a few molecular markers and lacked previous information about centromere location and the relative distances between the markers and the centromeres. With the recent location of centromeres in the citrus genetic map Aleza et al., 2015), the markers used by Ferrante et al. (2010), JK-TAA1, JK-TAA15, JK-TAA41, and NB-GT03, are located at 87.29, 59.07, 74.99, and 50.47 cM from the centromere of the LGs 6, 1, 2, and 8 respectively, being mostly telomeric, and therefore the high PHR values obtained in their study can fit both SDR or FDR mechanisms. At the methodological level, we demonstrated the power of using two complementary approaches, namely, analysis of the PHR pattern in one LG with the maximum-likelihood method proposed by Cuenca et al. (2015). Considering only centromeric loci, different mechanisms can lead to the same homozygous patterns. Therefore, analyzing the heterozygosity restitution pattern along LGs at the individual level is a useful approach for distinguishing between SDR and PMD, since, under this mechanism, the heterozygosity restitution value is zero for all markers in all LGs. After LOD analysis at the individual level, this method is used to analyze PHR patterns at the population level to distinguish between SDR and PRD when individual LODs are under the threshold required to obtain conclusive results. When enough number of individuals is analyzed, this technique should also be utilized to distinguish between FDR and PRD. With FDR-2n gametes, heterozygosity restitution varies from 100% in centromeric loci to close to 66% in telomeric areas under the non-interference model (Cuenca et al., 2011), whereas, with PRD, heterozygosity restitution is expected to be very similar along the entire chromosome. Crossover and Interference Analysis Crossover interference ensures the appropriate distribution of crossovers along the chromosome, since one crossover reduces the likelihood of other crossovers occurring nearby (Youds et al., 2010). 
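The equation defining IC given in the Methods did not survive text extraction. Under the standard three-point definitions of Griffiths et al. (1996), the coefficient of coincidence is c = rd / (rCM1 · rM1M2) and interference is I = 1 − c; the paper's IC corresponds to one of these two quantities. The short sketch below simply computes both from observed recombination rates, with invented example numbers.

```python
# Three-point interference from observed recombination rates (Griffiths et al., 1996).
# r_cm1: centromere-locus 1 recombination rate; r_m1m2: locus 1-locus 2 rate;
# r_double: observed rate of double recombinants between the centromere and locus 2.
def coincidence_and_interference(r_cm1, r_m1m2, r_double):
    expected_double = r_cm1 * r_m1m2
    c = r_double / expected_double      # coefficient of coincidence
    return c, 1.0 - c                   # interference I = 1 - c

# Invented example values for one chromosome arm.
c, interference = coincidence_and_interference(r_cm1=0.30, r_m1m2=0.25, r_double=0.03)
print(round(c, 2), round(interference, 2))   # partial interference when 0 < I < 1
```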
The analysis of crossover rates (Table 6) for both arms of chromosome I revealed the presence of up to four crossovers on one arm and three on the other arm. In addition, three complementary crossovers (double crossing over involving four chromatids) were observed as a result of phase-changing between two homozygous markers. Similarly, Cuenca et al. (2011) and Aleza et al. (2015) detected up to four crossovers on one arm and complementary crossovers in "Fortune" mandarin and C. clementina. We estimated the IC for each chromosome arm, finding partial interference in both arms (IC = 0.27 and 0.44). Such variation in interference values between both arms has also been observed in other citrus species, ranging from 0.82 to 0.48 for "Fina" clementine on LG 1 and 0.73 to 0.53 for "Fortune" mandarin on LG 2 (Cuenca et al., 2011). Variation in the level of interference between different parts of the genome has also been observed in Arabidopsis (Drouaud et al., 2007), humans (Lian et al., 2008), and mice (Broman et al., 2002). TABLE 6 | Number of observed crossover events on each arm of chromosome I based on analysis of 27 genotypes recovered from "Eureka Frost" lemon pollinated with C. ichangensis and "Fortune" mandarin using six molecular markers. Implications of Sexual Polyploidization for Breeding Triploid Lemon-Like Plants Sexual polyploidization via 2n gametes and interploid sexual hybridizations using tetraploid parents (doubled diploids) are the main strategies used to produce triploid citrus hybrids (Ollitrault et al., 2008; Aleza et al., 2010b, 2012a,b, 2016; Navarro et al., 2015). These different strategies and the different meiotic behaviors result in different genetic structures in the diploid gametes and, consequently, in the resulting triploid progenies. The three hybrids obtained via FDR or PRD 2n gametes have a higher rate of heterozygosity than hybrids obtained via SDR. By contrast, the two plants obtained by PMD transmit 0% of PHR (Bastiaanssen et al., 1998). Therefore, such a mechanism generally promotes inbreeding in the hybrid progenies (Tai, 1986; Gallais, 2003). However, these lines constitute interesting parents to be used as test lines in inheritance studies (Bastiaanssen et al., 1998). In addition, the mechanism that generates the 2n gametes affects the breeding efficiency for a character in relation to the genetic distance between the centromere and the major genes controlling this character. For instance, Cuenca et al. (2013b, 2016) found that resistance to Alternaria brown-spot fungal disease is a recessive trait controlled by a single locus located at 10.5 cM from the centromere of chromosome III. Therefore, in crosses between a heterozygous parent producing diploid gametes and a resistant genotype, PMD is the most favorable mechanism (50% of resistant hybrids), followed by SDR (40%). Under FDR, only 5% of the hybrids will be resistant. For diploid gametes produced by a doubled-diploid genotype or resulting from PRD, the rates of resistant hybrids should vary from 16% (tetrasomic segregation) to 0% (disomic segregation) according to the preferential pairing behavior. The aim of some lemon-breeding programs is to produce new lemon-like types of fruit, which essentially involves 2x X 4x crosses using diploid lemons as female parents and more or less complex hybrids as tetraploid parents (Recupero et al., 2005; Viloria and Grosser, 2005).
This approach is used in an attempt to solve some of the problems caused by the low genetic variation of C. limon, although relatively few tetraploids are available. This approach has allowed for the selection and protection of the triploid "Lemox", a hybrid between a diploid female complex hybrid, and tetraploid lemon (Recupero et al., 2005). "Lemox" produces quality fruits resembling lemons with high tolerance to Mal secco. The 2n lemon gametes will be very useful for producing new lemon-like seedless citrus types via 2x X 2x hybridizations, thereby dramatically increasing the gene pool of genotypes that could be used as parents. Furthermore, the production of 2n gametes has been investigated in a small number of lemon genotypes. Evaluating the many existing lemon genotypes may result in the detection of specific genotypes that produce higher rates of 2n gametes and (eventually) genotypes with different ratios of FDR and SDR 2n gametes, which will increase the efficiency of breeding programs. CONCLUSION Genetic analysis with SSR and SNP markers revealed that two genotypes of C. limon, "Eureka Frost" and "Fino", produced 2n female gametes. The frequencies of 2n gametes were 4.9 and 8.3% for "Eureka Frost" and "Fino" lemons, respectively. The use of complementary methods, including individual LOD analysis from centromeric loci, telomeric loci genotyping, and the analysis of PHR patterns along a LG, allowed us to distinguish among the different mechanisms of 2n gamete formation. We detected three meiotic mechanisms in lemon, with 88% of 2n female gametes arising from SDR, 7% from FDR or PRD, and 5% from PMD. To our knowledge, this is the first report of the production of a large number of lemon progenies from 2n gametes and the identification of a new mechanism, PMD, which had never been observed in citrus and rarely been described in other herbaceous or woody species. From the breeding point of view, the production of SDR-2n gametes would allow progenies with polymorphic genetic structures to be recovered, increasing the likelihood of obtaining new phenotypes by creating an increasing number of novel multilocus allelic combinations. The coexistence of different mechanisms for 2n gamete formation broadens the diversity of lemon 2n gametes and, therefore, their potential for breeding. AUTHOR CONTRIBUTIONS LN, PO, and PA conceived and designed the experiments. HR performed the experiments. HR and PA analyzed the data. JC and PO provided a statistical method for the estimation of SDR and FDR mechanisms. HR, PA, LN, and PO wrote the manuscript. FUNDING This work was supported by a grant RTA2015-00069-00-00 from the Ministry of "Economía y Competividad" and "Instituto Nacional de Investigación y Tecnología Agraria y Agroalimentaria".
Revised and Extended Mobile Commerce Technology Adaption Model This research is designed to cover literature gaps in the intention to adopt mobile commerce in Jordan as development country. In one hand we explored and identified the non-technological factors that affect the intention to adopt mobile commerce. In the other hand we introduced a revised and extended mobile commerce technology adaption model based on the available literature and based on the Technology Acceptance Model (TAM). Our result shows that our proposed model is valid. Our model validity was confirmed using Loading Factor and Kaiser-MayerOlkin (KMO). The result of this research shows that Perceived Usefulness (PU), Perceived Ease of Use (PEOU), privacy, compatibility, government policy, legal protection, risk, cost and social-culture values factors have a direct significant effect in the intention to adopt mobile commerce. This research also finds that those factors are different in their effect in the mobile commerce adoption decision where legal protection factor has the highest impact in mobile commerce adoption decision while perceived usefulness factor has the lowest impact in making such decision. The result also shows that there is a positive relationship between all study factors and the intention to adapt mobile commerce except for risk and cost factors. INTRODUCTION "Mobile Commerce is any transaction that involve the transfer of ownership to use goods and services, which is initiated and/or completed by using mobile access to computer-mediated networks" (Tiwari and Buse, 2010).Both mobile commerce and electronic commerce involve conducting transactions over the Internet, but in m-commerce transactions are done over mobile networks.In mobile commerce the consumers can conduct transactions from remote locations using their mobile devices to saves time and costs.Mobile commerce also can be useful for the companies by allowing them to reach a wider range of customers without time and space limitations.Mcommerce really affect various fields of our life, i.e., finance, industry and many others.Because of mcommerce benefits; the adaption of firms in mcommerce has increased day after day; this will introduce so many types of business services in low cost. 
There are many economical, technical, cultural, legal and political factors facing m-commerce adoption in developed countries such as Jordan, these factors should be identified.To identify those factors; the researchers used the Technology Acceptance Model (TAM) in many cases in the field o f M-Commerce adoption and acceptance of new technology (Kini, 2009;Saad and Suleiman, 2010;Tarasewich et al., 2002;Wixom and Todd, 2005;Wu and Wang, 2005;Yang, 2005;Sudha et al., 2010;Saifullah Sadi and Mohamad Fauzan, 2011;Basem and Bassam, 2012;Ghassan et al., 2013;Feras et al., 2013).This research will study the factors that affect the intention to adopt mobile commerce in Jordan from non-technological point view since the technological factors are studied by other researchers (Feras et al., 2013).Also a revised M-Commerce technology adaption model is introduced and validated by the researchers in this study.Saad and Suleiman (2010) applied TAM and found that perceived trust, perceived usefulness, perceived ease of use, social and cultural values have significant effect on the intention to deploy mobile commerce technology and they found also that the economical issue is not significant.Yang (2005) used and employ the Technology Acceptance Model (TAM) to examine factors affecting Singaporeans attitudes toward mcommerce, the results for this research shows that there is positive relationships between the following factors: PU, PEOU, AT, innovativeness, adoption behavior and demographics and between the adoption of Mcommerce.YANG Results found and support the applicability of TAM and its extension to examine and test M-commerce adoption by Singapore consumers.Wu and Wang (2005) presents an extended Technology Acceptance Model (TAM) that integrates innovation diffusion theory, perceived risk and cost into the TAM to investigate what determines user Mobile Commerce (MC) acceptance factors.They find that all o f t h e previous variables except perceived ease of use affect the behavioral intention of the users.Also t hey found that compatibility had the most significant effect.Kini (2009) in his study to the electronic and mobile commerce adoption in Chile shows that the following factors: mobile access speed, service quality and price, needs improvement.Tarasewich et al. (2002) identifies then categorizes some of m-commerce issues so that interested people have a starting point for focusing their activities within the m-commerce area.In Feras et al. (2013), the authors introduced extended and revised mobile commerce technology adaption model suitable for Jordan as development country, the model covered only the technological barriers for mobile commerce technology adaption.In Ghassan et al. (2013) the authors studied a subset from the non-technological barriers such as personal and societal norms perspectives that affect the adoption of M-Commerce in Jordan.In Basem and Bassam (2012) the authors explored trust and social influence factors that affect the customer acceptance of M-Commerce Services in Jordan, this study covers the actual use of M-Commerce not the intention to use of M-Commerce. 
By reviewing the previous available literature; we identified four gaps that should be evaluated and resolved.First: there is no valid Mobile Commerce Technology Adaption Model suitable for Jordan as development country.Second: no research covers all the non-technical factors (such as social, economical, legal, cultural and political) that affect the intention to adopt mobile commerce in Jordan as development country.Third: most of the previous studies studied and focused on the factors that affect the actual use of Mobile Commerce, in this study we will focus on the barriers that affect the intention to use Mobile Commerce.Forth: the previous studies in Jordan and in similar development countries chooses study populations that are usually not aware of mobile commerce since the mobile commerce is not implemented in those countries, to overcome this gap; our study chooses the mobile commerce companies employees as the study population. Technology Acceptance Model (TAM) is usually used to study the adoption and acceptance of new technology in the field of Information Systems (IS) by many researchers in many situations.TAM usage also is Popular in m-commerce adoption and acceptance of new technology (Kini, 2009;Saad and Suleiman, 2010;Tarasewich et al., 2002;Wixom and Todd, 2005;Wu and Wang, 2005;Yang, 2005;Sudha et al., 2010;Saifullah Sadi and Mohamad Fauzan, 2011;Basem and Bassam, 2012;Ghassan et al., 2013;Feras et al., 2013).As shown in Fig. 1 TAM introduce Perceived Usefulness (PU) and Perceived Ease of Use (PEOU) as factors that determine an individual's intention to use a system.Perceived usefulness is seen as being directly affected by perceived ease of use. Perceived usefulness is the person's perception of how the use of a specific system will increase the job performance for the employees within an organizational context (Wixom and Todd, 2005).PEOU define the degree to which the intended user expects the target system to be free of effort (Wixom and Todd, 2005).According to the TAM model; the PU and PEOU will usually have a real impact on t h e one's attitude toward the use of a particular technology.Attempts to extend TAM have generally taken one of three approaches (Feras et al., 2013) by introducing factors from related models to TAM, by introducing additional or alternative belief factors and finally by examining moderators of perceived usefulness and perceived ease of use.During the last decade of Study importance and objectives: Compared to ecommerce, m-commerce has limited academic research available on the literature in development countries such Jordan because it is still in its early stages of development and the majority of consumers didn't has any chance to use or adopt the m-commerce in their daily lives (Basem and Bassam, 2012;Ghassan et al., 2013;Feras et al., 2013).The overall aims of this study can be categorized in two major parts: first: investigate and identify the major non-technological factors that affect the intention to use m-commerce in Jordan.Second: introduce and validate a revised and extended mobile commerce technology adaption model suitable for Jordan as development country. The study problem: This study used the Technology Acceptance Model (TAM) and the previous studies to propose a modified model then use it to explore the non-technological factors affecting the intention to adopt m-commerce in Jordan. 
The study hypotheses: Based on the proposed model, the null hypotheses of the study can be stated as follows:

H1: Perceived Usefulness (PU) has no direct significant effect on the intention to adopt m-commerce in Jordanian telecommunication companies.
H2: Perceived Ease of Use (PEOU) has no direct significant effect on the intention to adopt m-commerce in Jordanian telecommunication companies.
H3: Privacy has no direct significant effect on the intention to adopt m-commerce in Jordanian telecommunication companies.
H4: Compatibility of e-commerce adoption has no direct significant effect on the intention to adopt m-commerce in Jordanian telecommunication companies.
H5: Government policy has no direct significant effect on the intention to adopt m-commerce in Jordanian telecommunication companies.
H6: Legal protection has no direct significant effect on the intention to adopt m-commerce in Jordanian telecommunication companies.
H7: Risk has no direct significant effect on the intention to adopt m-commerce in Jordanian telecommunication companies.
H8: Cost has no direct significant effect on the intention to adopt m-commerce in Jordanian telecommunication companies.
H9: Social and cultural values have no direct significant effect on the intention to adopt m-commerce in Jordanian telecommunication companies.

The study methodology: Study population and sample: The study population consists of the three Jordanian mobile operators: Zain, Umniah, and Orange. A number of questionnaires were distributed to each company; 159 questionnaires were distributed in total, and 153 were approved for the purposes of the research and analysis.

Data collection methods: We designed our own questionnaire based on the literature and sent it to a number of specialists for evaluation; the questionnaire reliability was then evaluated using Cronbach's alpha. The questionnaire consists of eleven sections. The first section collects demographic data about the respondents in order to ensure that they have the necessary knowledge of computer-based information systems and of the questionnaire contents and are able to answer its questions. The remaining ten sections investigate the effect of the factors (Perceived Usefulness (PU), Perceived Ease of Use (PEOU), privacy, compatibility, government policy, legal protection, risk, cost, social-cultural values, and m-commerce use) on the intention to adopt m-commerce by telecommunication companies in Jordan.

Data analysis methods: To achieve the study objectives and test its hypotheses, the following statistical methods were used:

• Questionnaire reliability using Cronbach's alpha
• Model validity using factor loadings and the Kaiser-Meyer-Olkin (KMO) measure
• Descriptive statistics: frequencies, means, and standard deviations, used to characterize the study sample
• t-tests to examine the study hypotheses

Questionnaire reliability using Cronbach's alpha: To test and validate the questionnaire reliability, Cronbach's alpha was computed. Table 1 shows that the value of Cronbach's alpha for each variable exceeds the recommended value of 0.6 according to Cortina (1993) and Thompson and Davis (2000). The Cronbach's alpha values indicate good internal consistency among the scales and good reliability of the entire questionnaire.
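As a concrete illustration of the reliability check just described, the following minimal Python sketch computes Cronbach's alpha for one questionnaire section from a respondents-by-items score matrix; the scores shown are hypothetical placeholders and simply stand in for the survey data.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (respondents x items) array of Likert scores."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]                              # number of items in the scale
    item_variances = items.var(axis=0, ddof=1)      # per-item sample variances
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical example: 5 respondents answering 4 Likert items of one section.
section = [[4, 5, 4, 4],
           [3, 4, 3, 4],
           [5, 5, 4, 5],
           [2, 3, 3, 2],
           [4, 4, 5, 4]]
print(round(cronbach_alpha(section), 3))  # compare against the 0.6 threshold
```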
Model validity using factor loadings and the Kaiser-Meyer-Olkin (KMO) measure: To assess whether the model is valid, we used factor analysis with the Varimax procedure. This procedure relies on three values: the factor loading, which is recommended to be greater than 0.4 (Heck, 2004); the KMO measure, which assesses the suitability of the data for factor analysis and is recommended to be greater than 0.5 (Heck, 2004); and the eigenvalue, which is recommended to be greater than 1 (Heck, 2004). Table 2 shows that all three values satisfy the recommended thresholds, which means that the proposed model is a valid model for this study.

Study sample characteristics: Analyzing the answers to the first section of the questionnaire (Table 3) shows that the study sample is appropriately qualified in terms of academic level, with 76.5% of the individuals holding a bachelor's or higher degree. Table 4 shows the descriptive statistics for the nine factors (Perceived Usefulness (PU), Perceived Ease of Use (PEOU), privacy, compatibility, government policy, legal protection, risk, cost, and social-cultural values). The means range between 3.665 and 4.064, which implies that the respondents were positively disposed toward the study questions.

First hypothesis testing: H1: Perceived Usefulness (PU) has no direct significant effect on the intention to adopt m-commerce in Jordanian telecommunication companies. The decision rule is to reject the null hypothesis (and accept the alternative hypothesis) when the significance value is below 0.05. Since 0.001 < 0.05, the alternative hypothesis is accepted, which means that Perceived Usefulness (PU) has a significant effect on the adoption of m-commerce by Jordanian telecommunication companies. Based on the R² value in Table 5, PU explains 35.3% of the variation in the intention to adopt m-commerce, and an increase of one unit in PU improves the intention to adopt m-commerce by 0.629. This means that there is a positive relationship between PU and the intention to adopt m-commerce.

Second hypothesis testing: H2: Perceived Ease of Use (PEOU) has no direct significant effect on the intention to adopt m-commerce in Jordanian telecommunication companies. As shown in Table 5, the significance value is 0.000. Since 0.000 < 0.05, the alternative hypothesis is accepted, which means that perceived ease of use has a significant effect on the adoption of m-commerce by Jordanian telecommunication companies. Table 5 also shows that PEOU explains 40.1% of the variation in the intention to adopt m-commerce, and an increase of one unit in PEOU improves the intention to adopt m-commerce by 0.693. This means that there is a positive relationship between PEOU and the intention to adopt m-commerce.

Third hypothesis testing: H3: Privacy has no direct significant effect on the intention to adopt m-commerce in Jordanian telecommunication companies. As shown in Table 5, the significance value is 0.000, which is less than 0.05, so the null hypothesis is rejected. We can therefore say that privacy has a significant effect on the adoption of m-commerce by Jordanian telecommunication companies. Privacy explains 75.8% of the variation in the intention to adopt m-commerce, and there is a positive relationship between privacy and the intention to adopt m-commerce.
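For readers who want to reproduce this style of test, the sketch below shows, under stated assumptions, how the quantities reported for each hypothesis (significance value, R², and the per-unit effect) can be obtained from a simple linear regression of intention on a single factor; the variable names and values are invented placeholders, not the study data.

```python
from scipy import stats

# Hypothetical per-respondent mean scores (1-5 Likert) for one factor (PU) and
# for the intention-to-adopt items; real values would come from the survey data.
pu        = [4.2, 3.8, 4.5, 2.9, 4.0, 3.5, 4.8, 3.1]
intention = [4.0, 3.6, 4.4, 3.0, 3.9, 3.3, 4.7, 3.2]

fit = stats.linregress(pu, intention)
print(f"slope (effect of one unit on intention): {fit.slope:.3f}")
print(f"R^2 (share of variation explained):      {fit.rvalue**2:.3f}")
print(f"p-value (reject H0 if < 0.05):           {fit.pvalue:.4f}")
```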
Fourth hypothesis testing: H4: Compatibility of e-commerce adoption has no direct significant effect on the intention to adopt m-commerce in Jordanian telecommunication companies. As shown in Table 5, the significance value is 0.022, which is less than 0.05, so the alternative hypothesis is accepted; compatibility has a significant effect on the adoption of m-commerce by Jordanian telecommunication companies. The R² value shows that compatibility explains 44.4% of the variation in the intention to adopt m-commerce, and an increase of one unit in compatibility improves the intention to adopt m-commerce by 0.667. This means that there is a positive relationship between compatibility and the intention to adopt m-commerce.

Fifth hypothesis testing: H5: Government policy has no direct significant effect on the intention to adopt m-commerce in Jordanian telecommunication companies. As shown in Table 5, the significance value is 0.013, which is less than 0.05, so the alternative hypothesis is accepted; government policy has a significant effect on the adoption of m-commerce by Jordanian telecommunication companies. Table 5 also shows that government policy explains 53.4% of the variation in the intention to adopt m-commerce, and an increase of one unit in government policy improves the intention to adopt m-commerce by 0.731. This means that there is a positive relationship between government policy and the intention to adopt m-commerce.

Sixth hypothesis testing: H6: Legal protection has no direct significant effect on the intention to adopt m-commerce in Jordanian telecommunication companies. As shown in Table 5, the significance value is 0.000, which is less than 0.05, so the null hypothesis is rejected; legal protection has a significant effect on the adoption of m-commerce by Jordanian telecommunication companies. Based on the R² value in Table 5, legal protection explains 82.7% of the variation in the intention to adopt m-commerce, and an increase of one unit in legal protection improves the intention to adopt m-commerce by 0.802. This means that there is a positive relationship between legal protection and the intention to adopt m-commerce.

Seventh hypothesis testing: H7: Risk has no direct significant effect on the intention to adopt m-commerce in Jordanian telecommunication companies. The significance value is 0.000, which is less than 0.05, so the null hypothesis is rejected. This means that risk has a significant effect on the adoption of m-commerce by Jordanian telecommunication companies. Risk explains 52.7% of the variation in the intention to adopt m-commerce, and an increase of one unit in risk decreases the intention to adopt m-commerce by 0.714. This means that there is a negative relationship between risk and the intention to adopt m-commerce.

Eighth hypothesis testing: H8: Cost has no direct significant effect on the intention to adopt m-commerce in Jordanian telecommunication companies. The significance value is 0.000, which is less than 0.05, so the null hypothesis is rejected; cost has a significant effect on the adoption of m-commerce by Jordanian telecommunication companies. Table 5 also shows that cost explains 54.5% of the variation in the intention to adopt m-commerce, and an increase of one unit in cost decreases the intention to adopt m-commerce by 0.629. This means that there is a negative relationship between cost and the intention to adopt m-commerce.
Ninth hypothesis testing: H9: Social and cultural values have no direct significant effect on the intention to adopt m-commerce in Jordanian telecommunication companies. As shown in Table 5, the significance value is 0.014, which is less than 0.05, so the null hypothesis is rejected. Social and cultural values therefore have a significant effect on the adoption of m-commerce by Jordanian telecommunication companies. The R² value shows that social and cultural values explain 61.1% of the variation in the intention to adopt m-commerce, and an increase of one unit in social and cultural values improves the intention to adopt m-commerce by 0.825. This means that there is a positive relationship between social and cultural values and the intention to adopt m-commerce.

CONCLUSION

This research proposed a new revised and extended mobile commerce technology adoption model. The model is valid; its validity was confirmed using factor loadings and the Kaiser-Meyer-Olkin (KMO) measure. The empirical results show that Perceived Usefulness (PU), Perceived Ease of Use (PEOU), privacy, compatibility, government policy, legal protection, risk, cost, and social-cultural values have a direct significant effect on the adoption of mobile commerce by Jordanian telecommunication companies. The research also finds that these factors differ in the strength of their effect on the mobile commerce adoption decision: legal protection has the highest impact on the adoption decision, while Perceived Usefulness (PU) has the lowest. The results also show that there is a positive relationship between all study factors and the intention to adopt mobile commerce, except for the risk and cost factors.

Fig. 2: The proposed study model. The proposed model in Fig. 2, based on TAM and the available literature, contains the following non-technological external factors: privacy, compatibility, government policy, legal protection, risk, cost, and social-cultural values. In the modified proposed model in Fig. 2 we suggest that there is a direct relationship between PU, PEOU, privacy, compatibility, government policy, legal protection, risk, cost, and social-cultural values and the intention to adopt m-commerce.

Table 5: Results of hypothesis testing according to t-tests
4,491.2
2014-04-05T00:00:00.000
[ "Business", "Computer Science" ]
Corrigendum: Methodology and search for superconductivity in the La–Si–C system In this paper we describe a methodology for the search for new superconducting materials. This consists of a parallel synthesis of a highly inhomogeneous alloy which covers large areas of the metallurgical phase diagram combined with a fast, microwave-based method which allows non-superconducting portions of the sample to be discarded. Once an inhomogeneous sample containing a minority phase superconductor is identified, we revert to well-known thorough identification methods which include standard physical and structural methods. We show how a systematic structural study helps in avoiding misidentification of new superconducting materials when there are indications from other methods of new discoveries. These ideas are applied to the La–Si–C system which exhibits promising normal state properties which are sometimes correlated with superconductivity. Although this system shows indications for the presence of a new superconducting compound, the careful analysis described here shows that the superconductivity in this system can be attributed to intermediate binary and single phases of the system.

Introduction The search for materials with novel properties, new superconductors in particular, is a difficult and sometimes tedious task. It is difficult because a deliberate search for new superconducting materials in a particular system is rarely successful. As a consequence, the discovery of these new materials has been mostly accidental since the discovery of the phenomenon [1][2][3][4][5][6]. Moreover this is a tedious task because the systems under study are usually materials consisting of several elements with complex phase diagrams. The interesting novel superconducting properties generally occur in a very narrow phase diagram region where superconducting and nonsuperconducting phases are likely to coexist. In some ways, the search for new materials is akin to the search for a 'needle in a haystack' in which most of the material is 'irrelevant'. Based on the past history of discoveries in superconductivity, clearly some novel unconventional ideas are needed. Our method consists of a fast process for discarding most of the ('uninteresting') non-superconducting part of a multinary phase diagram. This is done by combining a parallel method for the preparation of highly inhomogeneous samples ('phase spread alloy') together with a fast, sensitive screening using magnetic field modulated microwave spectroscopy (MFMMS). Once a sample containing a minority superconducting phase is identified, comprehensive and quantitative structural, transport and magnetic methods are applied to identify the phase responsible for the superconductivity. Figure 1 shows a block diagram of the methodology we describe here. The parallel synthesis and fast screening method indicated in the diagram allows large non-superconducting areas of the phase diagram to be discarded. The samples that pass the initial screening stage are then subjected to detailed (albeit slow) magnetic, transport and structural studies to rule out known superconductors. This allows for a rational search among the many possible candidates. In spite of this, the number of possible candidates is enormous and therefore additional restrictions must be used. The initial starting point of candidate systems necessarily needs some intuitive, theoretical or past-experience-based selection.
High temperature superconducting phases (for instance in cuprates) often lurk near other phase boundaries, such as metal-insulator and/or antiferromagnetic phase transitions [7,8]. They are highly anisotropic, consisting of low dimensional structures which are doped by charges from other portions of the structure. Mixed valency and charge disproportionation seem to be in many cases coincidental with high temperature superconductivity. Usually interesting materials are embedded in multiphase samples where only a small portion is responsible for the superconductivity. In fact the original discovery of high temperature superconductivity in cuprates, for instance, was found in multiphase compounds [4]. This is why an approach intentionally targeting inhomogeneous samples may increase the possibilities of finding new superconducting phases as advocated here. Description of the experimental methodology As an initial step our screening method uses a highly sensitive technique since usually the superconducting phases are only small fractions of the whole sample, as described above. Conventional superconducting quantum interference device (SQUID) magnetometry is sensitive; however, it is tedious and slow, whereas transport measurements are only useful if a superconducting percolation path is present in the sample. On the other hand, magnetic field modulated microwave spectroscopy (MFMMS) [9][10][11] is very sensitive and allows detection of minuscule superconducting regions more quickly and more sensitively than conventional methods, which is particularly useful for highly inhomogeneous systems. Therefore, MFMMS provides the first screening step and allows large parts of the phase diagram which are not superconducting to be discarded. Once an inhomogeneous sample containing a minority superconducting phase is found, it is necessary to identify the phase responsible for the superconductivity. This is usually done using a battery of tests including nanoscaled structural, chemical, physical and optical measurements. As a first and powerful approach structural refinement techniques allow identification of the various compounds present in the multiphase sample. This allows not only the pinpointing of and characterizing of new phases but also the identification of impurity phases, which may ultimately be responsible for the observed superconducting properties. X-ray powder diffraction (XRD) is a well-known structural characterization method for materials, which provides phase and structural information by indexing observed peaks, and fitting the measured intensities with model calculations. In particular, the very successful Rietveld refinement technique uses a least squares fitting approach to obtain a good match of multiphase models with measured data [12]. Laboratory-based powder XRD data provide reasonably good results, although data collected at a synchrotron radiation sources provide a higher flux and a tunable wavelength. Because data obtained at synchrotrons have smaller background contributions, larger signal to noise ratio and better instrumental calibration the results and analysis are usually more reliable. This is somewhat disadvantageous due to the limited beam time available at neutron and synchrotron facilities. However, recent developments, like the mail-in service at beamline 11-BM of the Advanced Photon Source at Argonne National Laboratory [13], enable access to high resolution, synchrotron quality, powder diffraction data in a convenient and timely fashion. 
Materials candidates In addition to the search methodology described above, it is useful to restrict further the possible candidate systems to decrease the large phase space available. There are a few general guidelines which can be extracted from past experience. It is safe to assume that future discoveries will arise in multi-element compounds, as proven by the recent discovery of superconductivity in the pnictides [6], although even binary alloys (such as magnesium diboride) [3] can go unnoticed for a long time. In most cases, high temperature superconductors contain light elements (such as B, C, N, O, F, S and Cl). Moreover, charge separation among substructures in the material appears, so that both ionic and metallic/covalent bondings exist side by side, as for example in layered compounds. The proximity to insulating phases (magnetic or charge-ordered) has been also observed in several compounds [7,8]. Thus as a first approach it is useful to restrict the search to compounds which are multielement, anisotropic, layered, contain light elements and with some collective order (such as antiferromagnetism) in close proximity. Satisfying simultaneously all the above-mentioned conditions is nontrivial and therefore it may be useful to start from compounds which only partially satisfy them. Moreover, it is worth mentioning that some superconducting systems such as the A15 compounds do not meet the general conditions outlined above. These conditions should be taken as a starting point in order to narrow the initial search. However, taking them as strict rules could hinder the search for many potential new superconductors. The La-Si-C system In accordance with the above-mentioned guidelines here we searched for the presence of superconductivity in the La-Si-C system. This system has some of the common features that appear in superconducting materials. It is a multi-element compound and includes the presence of a light element, in this case C. One of the binary phases, La 5 Si 3 , has a tetragonal layered structure. In addition, a closely related Nb-Si system presents superconducting behavior when it is doped with B [14]. In the binary La-Si system there are five intermetallic phases: La 5 Si 3 , La 5 Si 4 , La 3 Si 2 , LaSi 2 and LaSi [15]. Among these superconductivity is found in LaSi 2 with α-ThSi 2 crystal structure and a T C of 2.3 K [16], La 3 Si 2 with a T C near 2.1 K [17], and La 5 Si 3 with a T C at 1.6 K [18]. It has been recently found that under high pressure-high temperature synthesis it is possible to stabilize superconducting LaSi 5 and LaSi 10 with T C of 11.5 K and 6.7 K respectively [19]. The crystal structure of the rare earth silicides of general formula Re 5 Si 3 is Cr 5 B 3 -tetragonal type for the La to Nd group and Mn 5 Si 3 -hexagonal type for Sm to Lu. Based on earlier expectations C may serve as the light-element dopant and perhaps help to stabilize a hexagonal La 5 Si 3 phase as reported previously in Nd 5 Si 3 , where the addition of C or B stabilized the hexagonal phase [20,21]. Synthesis Four different polycrystalline, multiphase samples were prepared by arc-melting the constituents on a water-cooled copper hearth under purified argon atmosphere. High purity La (99.95%), Si (99.9995%), and graphite chips (99.9995%) were used to prepare samples with the following nominal compositions: La 3 Si 2 (sample 1), La 5 Si 3 C (sample 2), La 5.5 Si 3 C (sample 3), and La 5 Si 3 (sample 4). The samples were turned and remelted four times to ensure homogeneity. 
The total weight loss after arc-melting was less than 0.3%. Then the samples were wrapped in tantalum foil and sealed in evacuated quartz tubes for further annealing. Samples 1 and 2 were annealed at 1100 °C for three days and then subjected to rapid quenching in liquid nitrogen. Samples 3 and 4 were annealed at 600 °C for three days and then cooled to room temperature over several hours.

X-ray powder diffraction (XRD) The synthesized metallic pellets were ground into a fine powder using an agate mortar and pestle to avoid preferred orientation, which may produce misleading diffraction patterns. XRD was initially performed in an in-house Bruker D8 Discovery rotating x-ray diffractometer with Cu Kα radiation. High resolution synchrotron powder diffraction data were collected using beamline 11-BM at the Advanced Photon Source (APS) at Argonne National Laboratory, using a wavelength of 0.41352 Å. Discrete detectors covering an angular range from −6° to 16° 2θ were scanned over a 34° 2θ range, with data points collected every 0.001° 2θ and a scan speed of 0.01° s−1 [13,22,23]. For the Rietveld refinement we used the EXPGUI software [24], a graphical interface for the GSAS package [25].

Magnetic field modulated microwave spectroscopy (MFMMS) MFMMS is based on a measurement of temperature dependent, phase sensitive microwave absorption while the sample is subjected to an AC modulated magnetic field. The AC field modulation together with phase sensitive detection produces a peak across the superconducting transition [11]. The superconducting onset temperature is correlated with the temperature at which the MFMMS signal falls below the background noise level. We used a Bruker EleXsys EPR spectrometer operating at a frequency of 9.2 GHz with the sample placed in the center of a cylindrical TE011 cavity. The spectrometer was operated in a non-conventional mode where the microwave absorption signal was measured as a function of temperature. In the series of experiments described here a 12 Oe external DC magnetic field was applied and modulated at 100 kHz, with a peak-to-peak amplitude of 10 Oe, while the temperature was quickly ramped from 5 to 12 K. To check the sensitivity of our system we showed that we could detect 3 × 10−11 cm3 superconducting volumes of Nb dots prepared by electron beam lithography on a Si substrate. In a whole different series of earlier experiments and materials systems, we have shown that this method is able to detect as little as 10−11 cm3 of a superconducting material embedded in an otherwise non-superconducting matrix [26,27]. Moreover, this method is also much faster (sometimes by as much as a factor of ∼100) and more efficient than SQUID magnetometry. Using a SQUID magnetometer, after the temperature is stabilized (typically 30 s) the magnetization of the sample is measured several times. On the other hand, MFMMS scans the microwave absorption while the temperature is rapidly ramped.

SQUID To characterize in detail the superconductivity of the samples that passed the screening, zero field cooled-field cooled (ZFC-FC) magnetizations were measured using SQUID magnetometry. The applied magnetic field was 100 Oe and the temperature was scanned in the 5 to 12 K range.

Results and discussion Following the methodology explained in the introduction, using MFMMS as a fast screening tool, we detected the presence of superconducting phases in the samples containing C (see figure 2).
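As a side note on reading the diffraction results, the small Python helper below converts between the scattering angle 2θ and the momentum transfer Q for the 11-BM wavelength quoted above, assuming elastic scattering and Q = 4π sin θ / λ; it is only an illustrative aid, not part of the original analysis.

```python
import math

WAVELENGTH = 0.41352  # Angstrom, the 11-BM setting used here

def two_theta_to_q(two_theta_deg, wavelength=WAVELENGTH):
    """Momentum transfer Q (1/Angstrom) from the scattering angle 2-theta (degrees)."""
    theta = math.radians(two_theta_deg / 2.0)
    return 4.0 * math.pi * math.sin(theta) / wavelength

def q_to_two_theta(q, wavelength=WAVELENGTH):
    """Scattering angle 2-theta (degrees) corresponding to a given Q (1/Angstrom)."""
    return 2.0 * math.degrees(math.asin(q * wavelength / (4.0 * math.pi)))

# The unindexed maxima discussed later sit at Q = 1.71 and 1.97 1/Angstrom.
for q in (1.71, 1.97):
    print(f"Q = {q:.2f} 1/A -> 2theta = {q_to_two_theta(q):.2f} deg, d = {2 * math.pi / q:.3f} A")
```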
The superconducting transition temperatures are 6.5 K and 8 K for samples 2 and 3 respectively. Since these samples passed the initial screening stage and showed superconducting transitions at temperatures higher than the T C s of the La-Si binary system, they were possible candidates as new superconducting materials. Because of this they were subjected to further detailed characterization. XRD measurements were performed to obtain phase and structural information on the samples (figure 3). Since most of the intense reflections from the different La-Si phases overlap, it is difficult to distinguish between these phases without careful analysis. For clarification, we have marked in figure 3 some of the reflections arising from La 5 Si 3 and La 2 C 3 which do not overlap with other diffraction maxima. Based on an initial analysis, in which peaks corresponding to different La-Si phases were indexed, we can conclude that sample 1 is a mixture of La 3 Si 2 and La 5 Si 4 phases. Sample 2 is a multiphase compound, with La 3 Si 2 as the majority phase; the presence of La 5 Si 3 is negligible. Samples 3 and 4 are mixtures of La 5 Si 3 , La 3 Si 2 and La 5 Si 4 . For those two samples, La 5 Si 3 is the majority phase. From the ZFC-FC magnetization curves (figure 4) we conclude that sample 1 is purely diamagnetic, without exhibiting any indications of superconductivity. Samples 2, 3 and 4 exhibit superconducting transitions at 7 K, 8 K and 6 K respectively. However, the transition in the latter case is different since the ZFC and FC curves split differently from in samples 2 and 3. For samples 2 and 3 the FC is flat and remains very close to zero until it completely separates from the ZFC, which continues to be flat. On the other hand, sample 4 has a smaller magnetization and the ZFC and FC curves remain close while decreasing initially until they start separating at a lower temperature. This shows that the irreversibility temperature, generally related to melting or depinning effects, is different in these samples [28]. The microwave absorption measurements (figure 2) confirm that the magnetization behavior is due to superconductivity. Slight differences between the T C measured with SQUID and MFMMS were found in our samples. These differences between T C depend on the microwave frequency [29] and were ascribed to the coupling of microwave currents to fluxons [30], and weak links present below T C . In addition, here this may be due to temperature differences between the sample holder and the temperature sensor, which is important for the MFMMS as explained above. Since the T C of the binary La-Si family ranges from 1.6 K (La 5 Si 3 ) to 2.3 K (LaSi 2 ), these results may indicate the discovery of a new superconducting phase. This potential new superconductor, which is produced by doping C into the La-Si system, has a T C between 7 and 8 K. Another potential superconductor could be ascribed to the 6.1 K transition observed in sample 4. This sample does not contain C and exhibits a different behavior in the superconducting transition ( figure 4). To clarify the origin of superconductivity, it is essential to carefully check and investigate the presence of already known superconducting compounds which may be present as impurities or intermediate phases. To obtain quantitative results from x-ray synchrotron data we performed Rietveld refinement. Figure 5 shows the observed and calculated XRD profiles for one of the samples (number 3). 
The 'chi squared' (χ 2 ), 'weighted profile R-factor' (R wp ) and phase weight fractions of the significant compounds found are given in table 1. All possible phases containing La, Si and C as single elements, their binary and ternary combinations and possible oxides formed during the syntheses were taken into account in the refinement process. The last issue is particularly important for the case of La, an element that quickly oxidizes. Only minute fractions of La 2 O 3 were found in the x-ray diffraction patterns of samples 2 and 3. The values of χ 2 displayed in table 1 are far from the ideal value of 1, especially for sample 2, which is usually the case for high precision data [31]. Rietveld refinement using the data acquired in our laboratory-based diffractometer, gives χ 2 very close to 1. This is partially due to the lower resolution of the lab-based data, and also reflects the larger fitted background contribution needed for a lower intensity source. Also, the random error in each synchrotron acquired data point is small; therefore small disagreements in the fit become more significant when comparing experimental and calculated patterns. Even with these drawbacks, it is preferable to work with the synchrotron data, because the resolution is better and Rietveld refinement results are more reliable since the instrument is carefully calibrated. From the analysis described above we found clear indication for the presence of the phases displayed in table 2. There is no evidence of La 5 Si 3 in Cr 5 B 3 -prototype hexagonal structure, P63/mcm (193), in any of the samples. This hexagonal phase may be stabilized in the Nd 5 Si 3 system when it is doped with C and B [20,21] and in the superconducting (T C = 7.8 K) Nb 5 Si 3 system doped with B [14]. The results from sample 2 (table 1) indicate that the superconductivity is not caused by the presence of the layered La 5 Si 3 phase since even the sample composed mainly of La 3 Si 2 shows a T C of 7 K. Sample 1, without C, does not display any signs of superconductivity. Therefore the T C s in samples 2 and 3 (7 and 8 K) are related to the presence of C. Sample 4, a sample without C as a starting material, also shows traces of superconductivity. Nevertheless, the T C of 6.1 K, the shape of the transition as well the values of the magnetization (figure 4) are different from the other superconducting samples. So the origin of this transition temperature could be different from samples 2 and 3. The T C around 6 K is very close to the La β transition temperature and including it in the Rietveld refinement improves the fit. Therefore the superconductivity in sample 4 is ascribed to the La β phase. For samples 2 and 3 containing C as a starting material an improved refinement is obtained if the La 2 C 3 compound is included as one of the phases. The superconductivity of this compound is strongly dependent on the C deficiency, with T C ranging from 5.5 to 13 K for compositions La 2 C 3−x , with x = 0.27 and 0 respectively [32]. The maxima in the diffraction patterns at Q values of 1.71 and 1.97Å −1 , marked with the symbols in figure 3, could not be attributed to any phases of the La-Si system. The values are in the same positions as the allowed reflections coming from La 2 C 3 assuming a lattice expansion of 2.5%. Also the diffraction pattern presents maxima at La 2 C 3 angle reflections, assuming a lattice expansion of 1.3%. 
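A minimal sketch of the consistency check invoked above: under a uniform lattice expansion by a fraction ε, every d-spacing scales by (1 + ε), so a reflection originally at Q moves to Q/(1 + ε). The reference Q value in the snippet is an illustrative placeholder, not a tabulated La2C3 reflection position.

```python
def expanded_q(q_reference, expansion_fraction):
    """Q position of a reflection after a uniform lattice expansion.

    A lattice expansion by a fraction eps scales every d-spacing by (1 + eps),
    so the corresponding reflection moves from Q to Q / (1 + eps)."""
    return q_reference / (1.0 + expansion_fraction)

# Illustrative only: q_ref stands for some reference reflection position; real
# values would be taken from the published La2C3 structure.
q_ref = 2.00  # 1/Angstrom, placeholder
for eps in (0.013, 0.025):  # the 1.3% and 2.5% expansions discussed in the text
    print(f"expansion {eps:.1%}: reflection shifts to Q = {expanded_q(q_ref, eps):.3f} 1/A")
```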
Rietveld refinement implies that the concentrations of the La 2 C 3 phase are 23.5% and 0.67% for samples 2 and 3, respectively (table 1). From SQUID magnetometry, at 5 K and 100 Oe the shielding moments are −0.4 emu g −1 and −0.04 emu g −1 for samples 2 and 3, respectively (figures 4(b) and (c)). These values of the magnetization correspond to shielding fractions of 28% and 2.8% (assuming a density of 5.7 g cm −3 ). These shielding fraction values are close to the phase concentration percentages, indicating that the superconductivity is related to the presence of the La 2 C 3 phase. The small differences in these values could be ascribed to the inhomogeneous distribution of the La 2 C 3 phase in the samples. Thus the occurrence of superconductivity in samples 2 and 3 is ascribed to the presence of La 2 C 3 , which is further supported by the Rietveld refinement and the superconducting shielding fraction obtained from SQUID magnetometry. So this superconductivity can be ascribed neither to new crystal structures formed by the addition of C nor to the doping of the layered La 5 Si 3 structure with a light element. The different transition temperatures may be related to different La 2 C 3−x compositions [32], with the larger C deficiency leading to the lower T C . Assuming that larger Si atoms replace C atoms in La 2 C 3−x , the T C variations are consistent with the lattice expansions. Sample 2 presents a lower T C and a larger lattice expansion, whereas sample 3 has a higher T C and a smaller lattice expansion.

Conclusions In conclusion, we have described a new approach for the search for new high T C superconducting materials. We have applied this method to the La-Si-C system, which shows normal state properties similar to high T C superconductors. Fast screening indicates the presence of possible new superconducting phases. However, a careful structural analysis implies that the superconductivity may be ascribed to the presence of the already known La 2 C 3 and La β superconductors. As a general rule, when the T C of multi-element materials is below or close to the T C of any of the single elements or partial compounds, the results should be taken with extreme caution. All checks must be made in order to avoid mistaken conclusions. Although no new superconductivity has been found in the La-Si-C system yet, we identified the origin of the superconductivity using the methodology described above. This shows the power of this method for future searches for new superconducting phases.
5,332.6
2011-01-01T00:00:00.000
[ "Physics" ]
Identifying Human Phenotype Terms by Combining Machine Learning and Validation Rules Named-Entity Recognition is commonly used to identify biological entities such as proteins, genes, and chemical compounds found in scientific articles. The Human Phenotype Ontology (HPO) is an ontology that provides a standardized vocabulary for phenotypic abnormalities found in human diseases. This article presents the Identifying Human Phenotypes (IHP) system, tuned to recognize HPO entities in unstructured text. IHP uses Stanford CoreNLP for text processing and applies Conditional Random Fields trained with a rich feature set, which includes linguistic, orthographic, morphologic, lexical, and context features created for the machine learning-based classifier. However, the main novelty of IHP is its validation step based on a set of carefully crafted manual rules, such as the negative connotation analysis, that combined with a dictionary can filter incorrectly identified entities, find missed entities, and combine adjacent entities. The performance of IHP was evaluated using the recently published HPO Gold Standardized Corpora (GSC), where the system Bio-LarK CR obtained the best F-measure of 0.56. IHP achieved an F-measure of 0.65 on the GSC. Due to inconsistencies found in the GSC, an extended version of the GSC was created, adding 881 entities and modifying 4 entities. IHP achieved an F-measure of 0.863 on the new GSC. Introduction Text mining techniques are essential to deal with the large amount of biomedical literature published every day [1]. One of their contributions is the ability to identify terms in literature that are represented in biomedical ontologies [2]. The Human Phenotype Ontology (HPO) [3] is an ontology that provides a standardized vocabulary for phenotypic abnormalities found in human diseases. The information on this ontology can facilitate the understanding of medical texts, such as electronic health records. However, the recognition of HPO entities in text is a nontrivial task. HPO entities span from simple to highly complex and descriptive entities, which range from 1 to 14 words. Groza et al. [4] provided a study on the complex nature of HPO entities and released a Gold Standard Corpora (GSC) to measure the performance of state-of-the-art Named-Entity Recognition (NER) systems. The top system was Bio-LarK CR that achieved an -measure of 0.56. This work presents our NER system, dubbed Identifying Human Phenotypes (IHP), which combines machine learning and validation rules to increase the performance to a more acceptable level. IHP adopted the framework provided by IBEnt [5] that uses Stanford CoreNLP [6] for text processing and applies Conditional Random Fields (CRFs) for the identification of entities. IHP uses a rich feature set (linguistic, morphological, orthographic, lexical, context, and other features) based on previous works in Biomedical NER, adapted for the identification of HPO entities. It also applies a validation stage based on manual rules, such as negative connotation analysis, which in combination with a dictionary can remove false positives, identify false negatives, and combine adjacent entities. IHP outperformed Bio-LarK CR in the GSC by achieving an -measure of 0.65. However, to fully understand why IHP did not achieve a larger improvement, we manually analyzed the errors produced by IHP and found some problems in the GSC, mainly missing entities. Thus, we created an extended version of the GSC (GSC+) that added 881 entities and modified 4 entities. 
Using the GSC+, IHP achieved an -measure of 0.86, which corresponds to a substantial increase (0.30) from the previous top -measure, 0.56. The remainder of this article will detail the data and methods used, the results obtained, and their discussion. Both the GSC+ and IHP source code are available at https://github.com/lasigeBioTM/IHP. Gold Standard Corpora. In 2015, Groza et al. [4] provided a unique corpus for HPO, dubbed Gold Standard Corpora (GSC). It consists of 228 manually annotated abstracts, containing a total of 1933 annotations and covering 460 unique HPO entities. These 228 abstracts were manually selected to cover 44 complex dysmorphology syndromes analyzed in a previous HPO study [7]. The GSC includes a high diversity of nontrivial matches due to the complexity of the lexical structure of phenotype expressions. For example, in the text "no kidney anomalies were found," the NER system should be able to recognize the term HP:0000077 (abnormality of the kidney). For each document file in the GSC (one line without a title), there is a corresponding annotation file (as many lines as the number of annotated entities). The annotation is given by three columns: the exact matching character offset, the HPO accession number, and the annotation text (e.g., "[27::42] HP_0000110 | renal dysplasia"). Besides releasing the GSC, the authors performed a comprehensive evaluation of three NER systems: the NCBO Annotator [8], the OBO Annotator [9], and the Bio-LarK CR [4]. Bio-LarK CR, just like IHP, is a recognition system with the objective of identifying HPO entities, while the NCBO Annotator and the OBO Annotator identify biomedical entities from several ontologies. Of these three annotators, Bio-LarK CR is the only one able to find complex HPO references, since it was developed with HPO as the main target. Thus, as expected, Bio-LarK CR was the top performer achieving an -measure of 0.56, with a recall of 0.49 and a precision of 0.65. However, the performance of Bio-LarK CR was not substantially higher than the OBO Annotator that achieved an -measure of 0.54. Given that these values ofmeasure are still far from being perfect, there is a need for NER systems that could enhance these levels of performance, for example, by employing machine learning techniques. HPO Benchmark Annotators. In this work, we implemented three HPO annotators in order to test an updated GSC, the GSC+. We used the OBO Annotator, the NCBO Annotator, and the Minimal Named-Entity Recognizer (MER) [10]. The NCBO Annotator is able to annotate text with relevant ontology concepts from the great number of ontologies available in BioPortal (https://bioportal.bioontology.org/), which is largest repository of biomedical ontologies. We used the NCBO API (http://data.bioontology.org/ documentation) targeted towards the HPO. The OBO Annotator is a semantic Natural Language Processing tool capable of combining any number of OBO ontologies from the OBO foundry to identify their terms in a given text. A particular HPO-specific version of this tool was used (available at http://www.usc.es/keam/PhenotypeAnnotation/OBOAnno-tatorJAR.zip), which specifically targets HPO entities. To match input text against HPO terms this tool uses two types of prebuilt inverted indexes: lexical and contextual. It provides a graphical interface which allows for an easy annotation of abstracts. 
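To make the annotation format described above concrete, here is a minimal parsing sketch for one GSC annotation line; the regular expression and the whitespace handling are our own assumptions about the format rather than code from the GSC distribution.

```python
import re

# One annotation per line, in the format described above, e.g.:
#   [27::42] HP_0000110 | renal dysplasia
ANNOTATION_RE = re.compile(r"\[(\d+)::(\d+)\]\s+(HP_\d+)\s*\|\s*(.+)")

def parse_gsc_annotation(line):
    """Return (start, end, hpo_id, text) for one GSC annotation line."""
    match = ANNOTATION_RE.match(line.strip())
    if match is None:
        raise ValueError(f"unrecognized annotation line: {line!r}")
    start, end, hpo_id, text = match.groups()
    return int(start), int(end), hpo_id, text.strip()

print(parse_gsc_annotation("[27::42] HP_0000110 | renal dysplasia"))
# -> (27, 42, 'HP_0000110', 'renal dysplasia')
```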
MER is a very simple NER tool that, given a lexicon (a text file with terms representing the entities of interest) and an input text, returns a list of the annotated entities. It provides an API (available at http://labs.fc.ul.pt/mer/), which already includes some particular lexicons, including the HPO. To test these annotators, all the abstracts in the GSC were used as the input. The results obtained from each tool were formatted into an identical GSC format and compared to obtain the precision, recall, and F-measure. We attempted to execute Bio-LarK CR; however, some of the external links it uses internally are not available anymore. The Bio-LarK CR source code has not been updated in recent years, and therefore we were unfortunately unable to make it functional. Nonetheless, it is worth remembering that the Bio-LarK CR F-measure was only 0.02 higher than that of the OBO Annotator, and thus the impact of not using Bio-LarK CR is minimal in our work.

Identification of Human Phenotypes. IHP relies on Stanford CoreNLP [6] and on an implementation of Conditional Random Fields (CRFs) provided by CRFSuite [11]. Biomedical NER systems commonly apply CRFs, which are a type of probabilistic model capable of labeling a sequence of tokens (sequences of characters with a specific meaning, such as words, symbols, or numbers) and producing an output of annotated named entities (word phrases that can be classified into a particular category). CRFs can include a rich feature set that, given a sequence and the corresponding labels, can obtain the conditional probability of a state sequence (a label) given a certain input sequence. In this case, the label represents the words that are part of a named entity. These models need to be trained on a training set. The trained model is able to label sequences of tokens with the most probable labels, according to that model [12]. CRFSuite was applied with the l2sgd algorithm (Stochastic Gradient Descent with an L2 regularization term). The standard algorithm settings of this tool were kept, except for an adjustment of the L1 coefficient value, due to a slight improvement in performance, and the addition of two Boolean settings ("feature.possible_states" and "feature.possible_transitions"), which affect the way features are generated and can improve the labeling accuracy. The two Boolean settings were set as follows: feature.possible_states = "1" and feature.possible_transitions = "1".

Figure 1 presents the annotation pipeline of IHP. The process starts by loading the GSC into IHP to be divided into a training and a testing set. The resulting sets are used to create a model with CRFSuite and a specific feature set. After the annotation process, there is a validation stage, in which a combination of a dictionary and manual rules (e.g., negative connotation analysis) is used to remove false positives, identify missed entities, and combine adjacent entities. In the evaluation stage, the results are calculated, returning the precision and recall of the annotation process. (Figure 1: Layout of IHP's annotation pipeline. IHP requires as input a Gold Standard Corpora that will serve as a training set for CRFSuite and to evaluate IHP performance in the end; a feature set to use in CRFSuite; and a list of rules and a dictionary to solve potential errors.) IHP uses Stanford CoreNLP and GeniaSS [13] (GENIA Sentence Splitter) to preprocess the text. GeniaSS was used with the default parameters.
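A minimal sketch, using the python-crfsuite package, of how a CRF trainer could be configured with the l2sgd algorithm and the two Boolean options mentioned above; the regularization value, the toy feature strings, the BIO labels, and the model file name are illustrative placeholders, not IHP's actual configuration.

```python
import pycrfsuite

# Configure a CRF trainer with the l2sgd algorithm and the two Boolean options
# discussed above; the regularization strength is a placeholder, not the value
# tuned for IHP.
trainer = pycrfsuite.Trainer(algorithm="l2sgd", verbose=False)
trainer.set_params({
    "c2": 0.1,                             # placeholder regularization coefficient
    "feature.possible_states": True,       # generate state features for all labels
    "feature.possible_transitions": True,  # generate all possible transition features
})

# Toy training sequences: sentences as lists of per-token feature lists, with
# BIO-style labels marking HPO mentions (e.g., "renal dysplasia").
trainer.append([["w=bilateral", "pos=JJ"], ["w=renal", "pos=JJ"], ["w=dysplasia", "pos=NN"]],
               ["O", "B-HP", "I-HP"])
trainer.append([["w=no", "pos=DT"], ["w=kidney", "pos=NN"], ["w=anomalies", "pos=NNS"]],
               ["O", "B-HP", "I-HP"])
trainer.train("ihp_demo.crfsuite")         # writes the trained model to disk

tagger = pycrfsuite.Tagger()
tagger.open("ihp_demo.crfsuite")
print(tagger.tag([["w=renal", "pos=JJ"], ["w=dysplasia", "pos=NN"]]))
```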
During the training stage, a model was created using CRFSuite, applying a 10-fold cross-validation technique on the GSC. For the creation of the model, a manually crafted feature set (linguistic, orthographic, morphological, context, lexicon, and other features) was selected according to the feature performance and according to the work of previous authors in the area of Biomedical NER [14][15][16][17][18][19][20][21]. This feature set is available at GitHub along with the annotator. The entire feature set includes the following: (i) Linguistic Features. Lemma and part-of-speech tags of the current token. (ii) Orthographic Features. Word case of the current token and presence of symbols (digits, left bracket, right bracket, slash, dash, quote, double-quote, left parenthesis, right parenthesis) in the token. (iii) Morphological Features. Prefixes (with length from 2 and 3), suffixes (with length from 1 to 4), word shape, and bigrams of the current token. (iv) Context Features. Lemma with a window size of 2, part-of-speech tags with a window size of 4, word shape with a window size of 2, prefixes (with length from 1 and 4) with a window size of 1, and suffixes (with length from 1 and 4) with a window size of 1. (v) Lexical Features. Stop words with a window size of 4. (vi) Other Features. Brown cluster representation of current token and classification of length class of word. Validation. The validation stage uses a combination of manual rules and a dictionary to remove false positives, identify missed entities, and combine adjacent entities. The rules and dictionary are available at https://github.com/lasige-BioTM/IHP. The dictionary contains all the terms and synonyms from the HPO database, as well as the training set annotations. The dictionary was processed to increase the amount of entity variations (e.g., for "abnormalities of the kidney" also add "kidney abnormalities" and vice versa). The manual rules work in combination with the dictionary and lists of words. The annotations in the testing set are not used in any of the rules to avoid bias issues. These word lists contain common HPO words, common part-of-speech (POS) tags, and stop words. These word lists can be found at https://github.com/lasigeBioTM/IHP/tree/master/src/other/ word_lists. The rules were developed specifically avoiding the testing set, that is, the rules use information from the training set and the HPO database to create a dictionary, but never from the testing set. The rules can be divided into two categories: identification of entities and removal of entities. The rules of each category will now be described. Further description of how the rules were developed can be found at the GitHub repository pointed above. Identification of Entities (i) Dictionaries Entities. It identifies dictionary entities using exact matching. (ii) Entity Variations. It finds specific entity structures by considering a set of common HPO nouns (e.g., "abnormalities" and "malformations") and possible variations of the next tokens in the sentence (e.g., "of," "of the," and "in the"). Using these structures, it then tries to match nouns (e.g., "abnormalities of the ear") or a group of adjectives (e.g., "defects of the outer, middle, and inner ear"). (iii) Longer Entities. It works similarly to the previous rule; however, instead of finding structures in the sentence, it uses entities from the results set and the dictionary as the base point. 
After finding these entities in the sentence, it tries to expand the entity boundaries (to the left or to the right) by identifying certain words and POS tags. For example, if "rib anomalies" was previously identified, it would identify "spine and rib anomalies" by expanding to the right, identifying the word "and" and the noun "spine." (iv) Smaller Entities. It checks if an entity can be separated into more entities by identifying specific words and POS tags (e.g., identification of the entity "pits of the palms" inside "pits of the palms and soles"). (v) Second Validation. A second validation process is performed using the list of previously identified entities. This list allows the rules to work on a larger number of entities.

Removal of Entities (i) General Errors. It removes entities with obvious mistakes such as entities formed only by digits, entities containing only a single quote/parenthesis, entities smaller than a total character length of 3, and entities containing more than one specific type of common noun (e.g., "abnormalities" or "malformations"). (ii) Incorrect Structure. It checks the POS tags in an entity and identifies possible errors, removing entities that end in commas, dots, prepositions, and determiners. (iii) Negative Connotation Analysis. HPO entities have a negative connotation because they refer to diseases and irregularities. This technique works at a small scale, removing only entities smaller than a length of 3 words. For example, the noun "development" always occurs in conjunction with another word (e.g., "cognitive development"). If this word is found inside an entity, the entity could only have a negative connotation with an additional word (e.g., "cognitive development impairment"). This rule removes entities that satisfy two conditions: the entity has 2 tokens and the entity contains a noun with a positive connotation. (iv) Stop Words. It uses two types of stop word lists with different levels of exclusion. One list contains word phrases that remove entities using exact matching, and the other list contains word phrases that remove entities containing that word phrase in any part of the entity (partial match).

Results Using the GSC and 10-fold cross-validation, IHP achieved an F-measure of 0.65. Table 1 shows that IHP outperforms the comparison annotator in the GSC (an increase of 0.09) due to the selected feature set and the validation process. It is also important to remember that, despite avoiding overfitting, IHP was developed using GSC results as a reference. Feature Performance. To study the feature performance, each feature was tested individually to check its impact. The feature performance results are divided into six categories: linguistic, orthographic, morphological, context, lexical, and others. To minimize the impact of any collinearity issues that may exist, we obtained both the results for individual performance in each category and the incremental contribution of each category to the overall performance. The features were ordered from simple features, such as linguistic and orthographic features, which use the basic structure of a token, to more complex features such as context and lexical features, which use a larger number of tokens. Although a complete study of feature performance would require every possible combination to be evaluated, we assumed this order to minimize the running time of our experiments. The baseline feature corresponds to the results when only the current token text is considered. Each feature was tested on a single cross-validation iteration.
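To give a flavor of the feature categories listed in the Methods, the sketch below builds a few simplified orthographic, morphological, and context features for one token; it is an illustration of the idea, not IHP's feature-extraction code.

```python
def word_shape(token):
    """Collapse a token into a coarse shape, e.g. 'Renal' -> 'Aa', 'HP123' -> 'A0'."""
    shape = ["A" if c.isupper() else "a" if c.islower() else "0" if c.isdigit() else "-"
             for c in token]
    collapsed = [shape[0]] + [c for prev, c in zip(shape, shape[1:]) if c != prev]
    return "".join(collapsed)

def token_features(tokens, i):
    """Simplified orthographic, morphological, and context features for token i."""
    tok = tokens[i]
    feats = {
        "lower": tok.lower(),
        "shape": word_shape(tok),
        "is_title": tok.istitle(),
        "has_digit": any(c.isdigit() for c in tok),
        "prefix2": tok[:2], "prefix3": tok[:3],
        "suffix3": tok[-3:], "suffix4": tok[-4:],
    }
    # Context window of +/- 2 tokens, in the spirit of the context features above.
    for offset in (-2, -1, 1, 2):
        j = i + offset
        if 0 <= j < len(tokens):
            feats[f"ctx{offset}:lower"] = tokens[j].lower()
    return feats

sentence = ["Bilateral", "renal", "dysplasia", "was", "observed"]
print(token_features(sentence, 2))
```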
This greedy approach may create some sampling bias and overfitting; however, the objective is to determine which features are more suited for HPO entities and not to analyze the absolute values of precision and recall. Table 2 shows an improvement in -measure as the features are added to the annotator. This increase in performance is mainly due to an increase in recall since the precision stays relatively the same. Context features have the best performance individually, followed by morphological and linguistic features, and together, they are responsible for most of the performance. Lexical and other features show a smaller effect on the performance. Validation Rules Performance. During the validation stage, a combination of manual rules and a dictionary was used to correct some of the errors made by the recognition system. Table 3 shows that although the validation step has a low impact on the -measure, there is a clear increase in the recall and a decrease in precision. This translates to more entities in the GSC being correctly identified but also more word phrases being incorrectly considered as entities. It is possible to see that the identification validation rules are responsible for the increase in recall, while the removal validation rules are responsible for the increase in precision. GSC+. It is common that some entities in a document may not be detected during the manual annotation process, leaving a gold standard corpus incomplete. This underannotation by the curators may lead to the automatic identification of many false positive which may in fact be correct annotations that were not identified during the manual annotation process, as discussed in [21]. We found that there are some inconsistencies in certain aspects of the GSC, such as the number of times an entity is annotated and the simultaneous annotation of superclass and subclass entities. An example of an inconsistent superclass/subclass entity annotation occurs with the superclass entity "tumours" and the subclass entities "tumours of the nervous system" (document 2888021 in the GSC) and "intracranial tumours" (document 3134615 in the GSC). In the former case, the GSC annotates "tumours" and "tumours of the nervous system" as entities; however, in the latter case, only "intracranial tumours" is annotated. To understand if there was really an underannotation problem in the GSC, we conducted a test which filters false positives from the results, removing the entities that are not in the GSC test set annotations but that exist in a dictionary containing HPO entities (created from the HPO database and GSC annotations). The filtered false positives are word phrases that should be considered as true positives. Using the previous example, since the entity "tumours" is not considered in the GSC but exists in the HPO dictionary, it would be removed from the results. Table 4 shows an increase in performance of about 0.17, which is a significant improvement. To address these issues, we updated GSC, dubbed GSC+ (https://github.com/lasigeBioTM/IHP/blob/master/GSC+.rar) taking into account the inconsistencies found. The GSC+ adds new instances of HPO entities that were automatically identified by IHP. Using the list of identified entities, the entities were checked by exact matching to see if they exist either in the HPO database or in the GSC annotations. The Table 5 shows the results on the GSC+. It shows that IHP has the best performance amongst the annotators, having an -measure 0.31 higher than the best performing annotator. 
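The under-annotation test described above can be sketched as a simple filtering step: predicted entities that are absent from the gold annotations but present in an HPO dictionary are treated as probable missing gold annotations rather than errors. The sets below are placeholders; in IHP the dictionary is built from the HPO database plus the training-set annotations, and entities also carry character offsets.

```python
def filter_dictionary_false_positives(predicted, gold, hpo_dictionary):
    """Split predictions into kept entities and likely under-annotations.

    An entity missing from the gold annotations but present in the HPO
    dictionary is treated as a plausible missing gold annotation and filtered
    out of the false-positive count."""
    kept, filtered_out = [], []
    for entity in predicted:
        if entity not in gold and entity.lower() in hpo_dictionary:
            filtered_out.append(entity)   # plausible missing gold annotation
        else:
            kept.append(entity)
    return kept, filtered_out

# Placeholder data for illustration only.
hpo_dictionary = {"tumours", "tumours of the nervous system", "renal dysplasia"}
gold = {"tumours of the nervous system"}
predicted = ["tumours of the nervous system", "tumours", "hippocampus"]
print(filter_dictionary_false_positives(predicted, gold, hpo_dictionary))
# -> (['tumours of the nervous system', 'hippocampus'], ['tumours'])
```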
Table 2 shows the importance of the selected features: linguistic, morphological, and context features are responsible for the best performance individually, and context features show the best individual performance among the three. The most likely explanation is that these features take the neighboring tokens into account and therefore gather more valuable information, allowing the system to perform better using context features than the other types. The performance improves steadily with the addition of features, driven mainly by an increase in recall. This increase in recall, but not in precision, means that although more entities are being correctly identified, incorrect word phrases are also being considered entities. Feature Performance. Before the addition of the features, IHP incorrectly identified word phrases such as "36 schwannomas" (instead of "schwannomas") and "jerky movements" (instead of "atactic jerky movements"). After the addition of the features, it was able to correct the previous mistakes, identifying "schwannomas" and "atactic jerky movements." Although it corrected some of the errors, it also identified some incorrect word phrases such as "neuroanatomy" and "hippocampus," probably because those words are used in other HPO entities. Validation Performance. The validation process is important for NER systems. We used a combination of manual rules and a dictionary to address some issues of the machine learning classifier. This process removed false positives, identified missed entities, and combined adjacent entities. The developed rules prioritize recall, trying to identify as many entities as possible according to specific syntactical structures commonly found in HPO entities. Using the Identification Rules leads to an increase of 0.19 in recall in comparison with No Validation Rules, and the recall remains relatively the same after the Removal Rules are applied. Since giving priority to recall leads to the identification of incorrect entities, a removal process is applied afterwards to discard the misidentified entities, improving the precision of the annotator. As seen in the previous example, many of these incorrect word phrases are caused by words that are used in HPO entities, such as "hippocampus." The application of removal rules, such as the use of stop words with different levels of exclusion, helps remove these types of entities. For example, if "hippocampus" is in a stop word list that works with exact matching, it would eliminate the word phrase "hippocampus" but it would not remove a word phrase such as "enlarged hippocampus." The removal process leads to an increase of 0.08 in precision in comparison with No Validation Rules and an increase of 0.9 in precision after the use of the Identification Rules. Although the F-measure remains practically the same before and after the use of all the validation rules, there is a clear increase in recall and a decrease in precision. The low precision values are due to the inconsistent annotation of the GSC and to IHP's attempt to identify as many HPO entities as possible. It is important to note that IHP was developed using the GSC results as a reference for the performance, meaning that there is a certain degree of bias. However, the manual rules were developed considering the general HPO entity structure. These rules also try to identify all instances of HPO entities in an abstract, which is something that does not always happen in the GSC. 
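The stop-word removal idea described above can be pictured with the minimal sketch below, which distinguishes an exact-match exclusion level from a token-level one; the list contents are hypothetical and do not correspond to IHP's actual rule set.

```python
# Hypothetical stop-word lists with two exclusion levels.
EXACT_STOPWORDS = {"hippocampus", "neuroanatomy"}  # removed only on exact match
TOKEN_STOPWORDS = {"normal", "unaffected"}          # removed if any token matches

def apply_removal_rules(candidates):
    """Filter candidate entity mentions produced by the recognizer."""
    kept = []
    for mention in candidates:
        lowered = mention.lower()
        if lowered in EXACT_STOPWORDS:
            continue  # "hippocampus" is dropped, "enlarged hippocampus" survives
        if any(token in TOKEN_STOPWORDS for token in lowered.split()):
            continue  # any mention containing these tokens is dropped
        kept.append(mention)
    return kept

print(apply_removal_rules(["hippocampus", "enlarged hippocampus", "normal gait"]))
# -> ['enlarged hippocampus']
```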
Therefore, we tried to avoid overfitting to this particular dataset when developing the rules, so that IHP can have a similar performance in other contexts. GSC Inconsistencies. The original GSC contains inconsistencies that could bring confusion to the machine learning-based annotator. By inconsistency we mean similar entity mentions that were annotated differently in multiple locations of a given corpus. The inconsistencies found in the GSC can be divided into four different types: number of annotations, entity meaning, nested entities, and superclass/subclass entities. Some examples of these inconsistencies are presented below. (i) Number of Annotations. The number of times an entity is identified in a document is inconsistent. For example, the entity "preauricular pits" (document 998578 in the GSC) appears three times in the text and is annotated all three times. In another situation, the entity "medulloblastoma" (document 19533801 in the GSC) also appears three times in the text but is only annotated twice. (ii) Entity Meaning. In some situations, annotated entities do not exactly match their meaning in the ontology, which can lead to some entities being misidentified. An example is the annotation of "calcium metabolism" (document 6882181 in the GSC) instead of "disturbance of calcium metabolism." The entity "calcium metabolism" by itself does not have any meaning in the HPO because it does not correspond to any abnormality. (iii) Nested Entities. Nested entities are entities that are contained within other entities. In the GSC, entities nested inside another entity are sometimes annotated and sometimes not. An example of this occurs with the entities "skin and genital anomalies" (document 12219090 in the GSC) and "spine and rib anomalies" (document 9096761 in the GSC). The entity "spine and rib anomalies" is annotated in the GSC, along with the entity "rib anomalies." However, the same is not true for the entity "skin and genital anomalies." This entity is annotated in the GSC (accession number: HP_0000078), but the entity "genital anomalies," which exists in the HPO, is not. (iv) Superclass/Subclass Entities. The final type of inconsistency has to do with superclass/subclass entities. This is closely related to nested entities because it also involves the identification of entities inside other entities. It is possible that some annotators identify only the most specific class (the subclass) of a certain entity, while others try to identify all the possible classes. However, the GSC is not consistent in the annotation of these types of entities. An example of this occurs with the superclass entity "tumours" and the subclass entities "tumours of the nervous system" (document 2888021 in the GSC) and "intracranial tumours" (document 3134615 in the GSC). In the first case, the GSC annotates both "tumours" and "tumours of the nervous system" as entities. In the second case, only "intracranial tumours" is considered an entity. GSC+. IHP tries to identify all instances of HPO entities (normal, nested, and subclass/superclass entities), independently of the number of times they appear in the text. Having the correct number of times entities appear in a document can be useful for calculating important values such as the term frequency, which is used to determine the importance of a term in a document. Since IHP tries to annotate as many entities as possible, it will identify many entities that are not in the GSC. 
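A simple way to surface the first type of inconsistency (annotation counts) is to compare how often an annotated surface form occurs in a document with how often it was actually annotated. The sketch below assumes a simplified data layout (plain-text documents plus a list of annotated mentions per document) rather than the GSC's real file format.

```python
import re

def annotation_count_inconsistencies(doc_text, annotated_mentions):
    """Report surface forms annotated fewer times than they occur in the text.
    `annotated_mentions` is the list of annotated strings for this document."""
    report = {}
    for form in set(annotated_mentions):
        occurrences = len(re.findall(re.escape(form), doc_text, flags=re.IGNORECASE))
        annotated = annotated_mentions.count(form)
        if annotated < occurrences:
            report[form] = (annotated, occurrences)
    return report  # e.g. {"medulloblastoma": (2, 3)} for document 19533801
```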
Annotating entities that are absent from the GSC will, of course, cause a decrease in precision and therefore in the overall performance. Table 4 presents the results of the test conducted to evaluate the potential of IHP in case these inconsistencies were not an issue. Since the test removes from the results all instances of false positives that exist either in the GSC annotations or in the HPO database (by exact matching), there is an increase in precision. The results show that IHP has the potential of achieving an F-measure of about 0.82, corresponding to an increase of 0.18 in comparison to the achieved results. This increase suggests that almost a fifth of the annotator's performance can be affected by inconsistencies. To determine IHP's performance in a situation where these inconsistencies are not an issue, the GSC+ was developed to provide a more consistent annotation of the abstracts. The development of the GSC+ involves the addition of automatic annotations from IHP that were matched (with exact matching) with word phrases in the HPO database and the GSC annotations. The GSC+ addresses some of the GSC's faults, since it only adds entities that were already considered HPO entities. Further testing may be conducted in the future to assess the effects of IHP's feature sets and validation step on the GSC+. The results of IHP in Table 5 were obtained using the same annotation process as with the original GSC. It achieved an F-measure of 0.863, an increase of more than 0.04 over its potential performance discussed above, because the GSC+ includes more improvements than just the filtering of false positives. IHP had a higher performance than the other annotators on the GSC+. The results show that all three annotators have a lower recall than precision, meaning that they identify a low number of entities in comparison to the total entities in the GSC+. While these annotators all use the HPO as the target ontology to annotate the text, they most likely use exact matching to identify entities in the text and therefore are not prepared for the level of syntactical variation that occurs in HPO entities. Although IHP is not necessarily capable of identifying entities as long as 14 words, it attempts to do so by using validation rules that expand the boundaries of identified entities. Annotators that use exact matching are not able to identify these types of entities, since all the words in the entity would have to exactly match the string in the HPO ontology, which is unlikely. We did not evaluate each validation rule individually, as was done above for the GSC, but we expect their impact to be highly similar on the GSC+ since we are using the same corpus and ontology. Another issue of these annotators is the choice of identifying subclass entities over superclass ones. Some annotators, like the OBO Annotator, prefer more specific annotations over more general ones and, therefore, will only identify a portion of those entities. The reason IHP had a better performance than the other annotators on the GSC+ is that it tries to annotate all instances of HPO entities. We can also see this by comparing the results on the GSC and on the GSC+. There is an increase in precision because all the entities that were previously seen as false positives (and that exist in the HPO database) are now considered true positives. 
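The GSC+ construction step can be sketched as a merge of the original gold annotations with those IHP predictions that exactly match a known HPO surface form. Names below are illustrative, and the real corpus update also involved checks of the inconsistencies discussed earlier.

```python
def build_gsc_plus(gsc_annotations, ihp_predictions, hpo_terms):
    """Extend the gold annotations with IHP predictions that exactly match
    a term from the HPO database or an existing GSC annotation.
    `gsc_annotations` and `ihp_predictions` are sets of (doc_id, surface_form)
    pairs; `hpo_terms` is a set of HPO surface forms."""
    known_forms = {form.lower() for form in hpo_terms}
    known_forms |= {form.lower() for _, form in gsc_annotations}
    gsc_plus = set(gsc_annotations)
    for doc_id, form in ihp_predictions:
        if form.lower() in known_forms:
            gsc_plus.add((doc_id, form))  # accepted as a valid HPO mention
    return gsc_plus
```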
Conclusion We presented IHP, an efficient system for identifying HPO entities in unstructured text, which uses a machine learning-based approach for the identification of entities coupled with a validation technique that combines dictionary-based and manual rule-based methods. IHP outperforms state-of-the-art HPO annotators like Bio-LarK CR on the GSC. This work provides a rich feature set (linguistic, morphological, orthographic, context, lexical, and other features) for the identification of biomedical entities, created based on the work of previous authors, and a group of validation rules that are used to fix errors caused by the machine learning-based annotator. This work also provides an analysis of the inconsistencies found in the GSC. With this analysis, an extended version of the GSC, the GSC+, was created, which is left as a contribution. The new annotations were added automatically by using IHP, so the GSC+ may still contain some issues, but since they have been exactly matched with curated annotations we believe that there is not much room for error. This new corpus can be used for further evaluations of HPO annotators and for other applications, such as term frequency analysis. The GSC+ was used to test IHP and three other annotators to provide a more reliable benchmarking tool for HPO annotators. IHP outperformed all three annotators with a substantial performance margin of more than 0.3 in F-measure. In the future, it would be interesting to improve IHP's performance by defining a richer feature set that could allow the identification of more complex entities and by further evaluating and enhancing each validation rule when applied to other domains. It is worth pointing out again that the GSC+ is not free of potential misannotations caused by IHP errors, and therefore as future work we aim at conducting a more thorough extension and validation of the GSC+ by having direct contributions from HPO annotators. Furthermore, a larger annotated corpus would allow IHP to deal with a wider range of real-world scenarios. It would also be interesting to apply phenotypic similarity to identify potential misannotations that are not semantically related to the entities found in the text [21]. Since electronic health records contain HPO terms, an exciting challenge would be to assess the performance of IHP on a multilingual corpus [22] and how it could help us represent the knowledge they contain using linked data technologies [23]. Conflicts of Interest The authors declare that they have no conflicts of interest.
7,417.2
2017-11-09T00:00:00.000
[ "Computer Science" ]
Hydraulic Performance of an Innovative Breakwater for Overtopping Wave Energy Conversion: The Overtopping BReakwater for Energy Conversion (OBREC) is an overtopping wave energy converter, totally embedded in traditional rubble mound breakwaters. The device consists of a reinforced concrete front reservoir designed with the aim of capturing the wave overtopping in order to produce electricity. The energy is extracted through low head turbines, using the difference between the water level in the reservoir and the sea water level. This paper analyzes the OBREC hydraulic performances based on physical 2D model tests carried out at Aalborg University (DK). The analysis of the results has led to an improvement in the overall knowledge of the device behavior, completing the main observations from the complementary test campaign carried out in 2012 in the same wave flume. New prediction formulas are presented for wave reflection, the overtopping rate inside the front reservoir and the overtopping at the rear side of the structure. Such methods have been used to design the first OBREC prototype breakwater, which is currently in operation. Introduction Energy consumption has been one of the most salient ways of measuring progress in society. This is especially true nowadays: fossil fuel is not only a cultural phenomenon; it is an economic necessity for many developing/developed countries. However, new parameters, such as energy efficiency, are beginning to be used to estimate the well-being of individual states. In this context, the renewable energy share of global consumption represents a world-wide index by which it is possible to assess the technological advancement of a country. Among the various renewable energy sources, ocean energy has attracted the attention of the business and scientific community from as early as 1973. The main reason is that the resource is so vast [1,2], i.e., the theoretical potential resource of ocean energy is more than sufficient to meet present and projected global electricity demands well into the future. In the last decade, numerous research projects aimed both at evaluating the available energy potential and at designing new types of converters (called Wave Energy Converters (WECs)) were carried out. This interest is motivated by the various advantages that characterize such a source [3]: the high energy density, greater than that of solar and wind; the easy prediction of the wave characteristics through numerical models [4,5]; and the reduced energy loss during wave propagation in relative water depth. However, these benefits are offset by the following drawbacks: the high variability of the wave characteristics over time [6]; the exposure of WECs to large environmental forces; and high production costs compared to other devices such as photovoltaic panels and wind turbines [7]. 
A solution to significantly decrease the costs would be to develop hybrid devices that can be embedded within coastal or offshore infrastructures [8][9][10]. This important new concept for coastal defense structures could be a realistic way for WEC systems to become economically competitive with other renewable energy devices, especially considering that they can be integrated within existing breakwaters or included in their upgrades. This integration has several advantages from an economical, constructional and operational point of view. Construction costs are shared, and access for construction, operation and maintenance of the devices would become much easier. Among the large number of WEC technologies, only very few devices have been constructed at the prototype scale, and not one is at the commercial stage. The utilization of wave energy close to the shoreline is attractive thanks to reduced costs concerning construction, access, maintenance and grid connection. On the other hand, the available wave energy is less than at deep water sites, although recent work (e.g., [11][12][13]) demonstrated that there are sites where energy is concentrated due to wave transformation phenomena, such as wave refraction. OBREC is a nearshore device combining rubble mound breakwaters with WECs. The device is able to extract energy through the wave overtopping phenomenon. Instead of dissipating the incoming wave energy on the breakwater armor layer, OBREC uses a concrete ramp in order to increase the overtopping discharge and a front reservoir designed to capture the wave overtopping in order to convert wave energy into potential energy. Water stored in the reservoir produces energy by flowing through low head hydraulic turbines, using the difference in water level between the reservoir and the mean sea water level (Figure 1c). To estimate the hydraulic and the structural performance of OBREC, 2D physical model tests were carried out at Aalborg University (DK) in 2012 (AAU12) [8,11,23]. The AAU12 tests were aimed at estimating the main differences between a traditional rubble mound breakwater and OBREC. Two different configurations, characterized by different ramp lengths, respectively 7.5 cm and 12.5 cm at the model scale, were tested. Hydraulic results showed that the integration of the device in a traditional breakwater improves the overall performances [8]. As regards the hydraulic performances, the main results are: (1) the device shows a similar or even reduced reflection coefficient with respect to a traditional rubble mound breakwater; (2) overtopping at the rear side of the structure is reduced by adopting appropriate precautions, e.g., the realization of a parapet at the crest of the OBREC crown wall; (3) new design methods have been proposed for the estimation of the reflection coefficient, the overtopping at the rear side of the structure and the overtopping volume in the front reservoir. 
However, the influence of some factors was not evaluated during the AAU12 tests. Indeed, the AAU12 tests did not allow for an understanding of how the hydraulic performance (reflection and wave overtopping) could be affected by the following geometrical characteristics: reservoir width, frontal ramp length and frontal ramp shape. In order to evaluate the influence of such parameters on the hydraulic performances of the device, new tests (AAU14) were carried out. The AAU14 tests were carried out at the Hydraulic and Coastal Engineering Laboratory of Aalborg University at a length scale of 1:30 (Froude scaling) compared to typical prototype dimensions. Different geometric configurations were investigated by varying the width of the reservoir, the water level and the profile of the frontal ramp. A few preliminary results on hydraulic performances have already been presented by Iuppa et al. [24]. This paper is organized as follows: Section 2 provides information on the experimental procedure and setup. In Section 3, the hydraulic performances of the device are evaluated: the reflection coefficient of the structure, the overtopping at the rear side of the structure and the overtopping volume in the front reservoir. Section 4 is devoted to an overall discussion with some concluding remarks. Wave Flume The model tests were carried out at the Hydraulic and Coastal Engineering Laboratory of Aalborg University at a length scale of 1:30 (Froude scaling) compared to the typical prototype dimensions. The wave flume is 25 m long, 1.50 m wide and 1.20 m deep. The flume configuration is shown in Figure 2. Moving from the paddle (a hydraulically-driven piston mode generator) to the model, the bottom was horizontal for the first 6.5 m, followed by a 3.5-cm step, a 1:98 slope section with a length of 9 m and, finally, a horizontal section where the model was placed. The flume was divided into two sub-flumes by a guiding wall in order to test two different device configurations. Each part was 0.73 m wide. Tested Configurations The tests were performed by varying: (i) the shape of the frontal ramp; (ii) the height of the frontal ramp, R r , with respect to the still water level (SWL); (iii) the width of the frontal reservoir. Figure 3 shows the cross-sections of the configurations analyzed in the AAU14 tests. In the figure: B r is the reservoir width; B s is the emerged sloping plate width; ∆B rs is the horizontal distance between the crown wall and the crest of the ramp; h r is the reservoir depth; R c is the crest free-board of the crown wall; R r is the crest free-board of the front reservoir; ∆R c is the vertical distance between the crown wall and the crest of the ramp; d w is the height of the sloping plate; d d is the height of the submerged sloping plate; h is the water depth at the toe of the structure. As regards the shape of the frontal ramp, two different configurations were tested: the first (defined as the flat configuration) was characterized by a flat ramp with a slope angle of 34°; the second (the curved configuration) has a curvilinear ramp where the slope angle varies linearly between 52° and 17°. The slope of the flat ramp was chosen based on Kofoed [25], where a maximization of the amount of overtopping was observed for such a geometry. The curved configuration was tested in order to verify, for the onshore condition, the results of Kofoed [26]. In that study, performed for offshore wave conditions, such a geometry maximizes the overtopping discharge. 
The height of the frontal ramp R r with respect to the s.w.l. influences the energy conversion. Indeed, low values of R r produce a high overtopping discharge in the front reservoir but, at the same time, result in a low hydraulic head on the turbine. Conversely, high values of R r produce a small discharge in the reservoir and a high hydraulic head. Typical values of R r are in the range of 1-3.5 m. Furthermore, the study of the response of the system to R r variation is relevant in order to evaluate the influence of the tide. The width of the front reservoir ∆B rs can influence the overtopping discharge through the overall structure. Typical values of ∆B rs are in the range of 5-15 m. The model geometrical characteristics are shown in Table 1 together with those of the AAU12 tests. A total of 9 cases were analyzed for each configuration. Hereafter, the symbols R1, R2 and R3 are used to indicate the three different values of R r . In order to reduce the overtopping discharge through the overall structure, a parapet was placed on the top of the upper crown wall (Figure 4). Indeed, as indicated by Vicinanza et al. [8], the presence of a "nose" causes a strong reduction of the overtopping at the rear side of the model. The parapet had the shape of an isosceles triangle with vertical and horizontal sides of 2 cm at the model scale. Such dimensions were defined on the basis of the authors' experience. The rubble mound material was chosen in order to ensure the stone stability under wave action and to reproduce the main hydraulic behavior of the structure. The equivalent cube side length exceeded by 50% of the stones (D n,50 ) for the armor layer was 50 mm. In order to reproduce the turbulent flow inside the filter layer, a D n,50 equal to 20 mm was assumed in such a layer. Finally, the D n,50 was equal to 5 mm for the core, in order to prevent the washing out of the core material. Wave Characteristics Waves were generated by a hydraulically-driven piston mode generator controlled by the software AwaSys, developed by Aalborg University. Simultaneously, active absorption of reflected waves was used in all tests [27]. Waves were generated based on the three parameters of the JONSWAP (JOint North Sea Wave Project) spectrum: significant wave height H m0 , peak period (T p ) and peak enhancement factor γ (γ = 3.3 in all tests). Each test contained at least 1000 waves. The data obtained from the eight wave gauges (four for each model) were analyzed with the software WaveLab (developed at Aalborg University). The software allowed one to estimate the wave characteristics (e.g., incident wave height, reflected wave height, energy wave period) with the method of Zelt and Skjelbreia [28]. A total of 200 tests was carried out. Table 2 shows the maximum and minimum values of the water level and of the wave characteristics estimated at the models' toe. Table 2. Maximum and minimum values of the water level and wave characteristics at the models' toe evaluated through the method of Zelt and Skjelbreia [28]. Table 3 reports the range of some dimensionless parameters. L m−1,0 and ξ m−1,0 represent the wavelength and the breaker parameter, respectively, referenced to the spectral incident energy wave period T m−1,0 . Instruments The laboratory campaign was carried out in order to gather data on the following characteristics: wave reflection, wave loading, and wave overtopping discharge both in the front reservoir and behind the whole structure. 
In each sub-flume, the following instruments were installed: 4 resistance wave gauges in order to evaluate the incident and reflected wave spectra; 2 boxes to collect the water discharge in the front reservoir and behind the whole structure; 2 depth gauges, protected by hollow PVC cylinders, in order to measure the overtopping discharge; and 14 pressure transducers for the estimation of the pressures/forces induced by the waves on the structure. The distance between the models and the wave gauges was approximately 2.80 m (Figure 2), according to the recommendations of Klopman and van der Meer [29]. In each overtopping accumulation box, a water level gauge was installed in order to measure the overtopping discharge and to control the evacuation pumps (Figure 5). The overtopping accumulation box was connected to the frontal reservoir by a PVC pipe, while the rear-side overtopping accumulation box received the discharge through a ramp placed on the top of the crown wall. The PVC pipe outflow was set at the same level as the reservoir bottom. Overtopping discharges at the rear side of the models and in the front reservoirs were estimated using the water level gauge measurements. Once the water level inside the box is known, it is possible to estimate the flow rate Q as Q = ∆V/∆t, with ∆V = A(h box )·∆h box , where ∆V is the overtopping volume variation in the time interval ∆t, A(h) is the cross-sectional area of the box and h box is the water level. The area A(h) is a function of the water level due to the presence of the pipes and pump used to extract the water. A(h) was estimated by measuring the water depth in the box after a known water volume was introduced into the box. Results The overtopping discharge Q in through Section S 1 is the sum of three terms, Q in = Q reservoir + Q rear + Q overflow , where Q reservoir is the flow through Section S 2 , Q rear is the flow through Section S 3 and Q overflow is the reflected overflow outgoing from the reservoir (Figure 6). The water effectively collected in the reservoir generates Q turbine . In Figure 7, the non-dimensional time-average wave overtopping per meter width, q* reservoir = q reservoir /(g·H m0 ³)^0.5, is plotted against the relative crest free-board R* r = R r /H m0 , where q reservoir is the time-average wave overtopping per meter width. The point data refer to tests with R r = 0.125 m. Two zones can be clearly identified: in Zone I, q* reservoir increases with decreasing R* r ; in Zone II, q* reservoir decreases with decreasing R* r . The trend observed in Zone I is typical of an overtopping process. In this zone, Q rear and Q overflow are null, and the time-averaged discharges satisfy Q in = Q reservoir = Q turbine . Zone II identifies a range where the reservoir operates in a saturated condition. In such a condition, a large volume of water is lost (i.e., Q overflow increases). For this reason, q* reservoir decreases with decreasing R* r . In this zone, the time-averaged Q in is greater than Q reservoir , and Q reservoir = Q turbine . The overtopping rate observed in Zone II is strongly affected by test-specific conditions (e.g., reservoir dimensions, localized and distributed load losses, hole geometry). For this reason, the overtopping rate observed in Zone II will not be taken into account in the analysis. Therefore, the following sections describe the results obtained in Zone I. 
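As an illustration of the wave-by-wave discharge estimation described above, the sketch below converts a measured water-level time series in the accumulation box into a flow rate using a level-dependent cross-sectional area; the tabulated area values are placeholders, not the actual calibration of the AAU14 boxes.

```python
import numpy as np

# Placeholder calibration: box cross-sectional area A(h) as a function of level.
level_points = np.array([0.00, 0.10, 0.20, 0.30])    # water level in the box [m]
area_points = np.array([0.090, 0.088, 0.086, 0.085])  # A(h) [m^2], reduced by pipes/pump

def overtopping_discharge(t, h_box):
    """Estimate Q = dV/dt from the box water level, with the volume change per
    step obtained from the average cross-sectional area over that step."""
    area = np.interp(h_box, level_points, area_points)   # A evaluated at each sample
    mid_area = 0.5 * (area[1:] + area[:-1])              # mean area per time step
    dV = mid_area * np.diff(h_box)                        # volume change per step [m^3]
    return dV / np.diff(t)                                # discharge [m^3/s]

t = np.linspace(0.0, 10.0, 101)
h = 0.05 + 0.002 * t          # slowly filling box (synthetic signal)
print(overtopping_discharge(t, h)[:3])
```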
Overtopping Discharge in the Front Reservoir The wave overtopping in the front reservoir can be analyzed as in the case of structures with a single slope. For these types of structures, in the case of non-breaking waves, the wave overtopping q can be expressed through a functional relationship f of the incident wave characteristics, the crest free-board R of the structure and a coefficient γ that takes into account all of the factors that reduce the wave overtopping. In the EurOtop Manual [30], the average dimensionless overtopping discharge per meter of structure width is estimated as q/(g·H m0 ³)^0.5 = c 1 ·exp[−c 2 ·R/(H m0 ·γ f ·γ β ·γ b )], where c 1 and c 2 are empirical coefficients; γ f is the influence factor for the permeability and roughness of the slope; γ β is the influence factor for oblique wave attack; and γ b is the influence factor for a berm. The coefficient c 1 is equal to 0.2, while c 2 is equal to 2.6 for the probabilistic method and 2.3 for the deterministic method. Equation (6) was extended by Victor and Troch [31] to the case of an impermeable slope. In particular, Victor and Troch [31] estimated the values of c 1 and c 2 as functions of the slope angle of the structure, the crest free-board and the significant wave height. The authors observed that the effect of the wave period can be neglected due to the non-breaking of the waves on the slope. Based on the AAU12 tests, Vicinanza et al. [8] proposed a new formula to estimate the average dimensionless overtopping discharge. The formula was developed on the basis of the tests conducted for two configurations of OBREC. In particular, two different lengths of the ramp (d w ) were tested (Section 2.2 provides a description of the geometrical and wave characteristics of the AAU12 tests). The equation proposed by Vicinanza et al. [8] (Equation (7)) is expressed in terms of a parameter s Rr , named the wave-structure steepness. The application range of Equation (7) is 0.64 < d w /∆R c < 1.35 and 0.0123 < s Rr < 0.202 [8]. Figure 8 shows the comparison between the measured non-dimensional average front reservoir overtopping discharge and the prediction methods: (a) Vicinanza et al. [8]; (b) EurOtop Manual [30]. The reliability of the prediction methods is expressed using the root-mean-square error (rmse), defined as rmse = [(1/N)·Σ(q* Obs − q* Est )²]^0.5, where N is the number of tests, q* Obs is the observed dimensionless overtopping discharge and q* Est is the dimensionless overtopping discharge estimated using the prediction method. The Vicinanza et al. [8] formula overestimates the observed values. Indeed, the rmse values are: 8.23 for R1, 7.90 for R2 and 7.71 for R3. The reason is straightforward and attributable to the significant difference in the model setup between the AAU12 and AAU14 tests (Figures 3 and 4; Table 1). Indeed, the AAU12 model was equipped with a toe berm, and the submerged ramp was absent (i.e., the new configurations are outside the range of application of Equation (7)). 
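A minimal sketch of how an EurOtop-type prediction and the rmse comparison can be coded is given below. The coefficient values follow the deterministic setting quoted above (c1 = 0.2, c2 = 2.3) and the influence factors default to 1; the exponential form used here is a reconstruction based on the definitions in the text, not a verbatim transcription of Equation (6).

```python
import numpy as np

G = 9.81  # gravity acceleration [m/s^2]

def eurotop_q(Hm0, R, c1=0.2, c2=2.3, gamma_f=1.0, gamma_beta=1.0, gamma_b=1.0):
    """Average overtopping discharge per meter width [m^3/(s*m)] for non-breaking
    waves: q = c1*sqrt(g*Hm0^3)*exp(-c2*R/(Hm0*gamma_f*gamma_beta*gamma_b))."""
    q_star = c1 * np.exp(-c2 * R / (Hm0 * gamma_f * gamma_beta * gamma_b))
    return q_star * np.sqrt(G * Hm0**3)

def rmse(q_star_obs, q_star_est):
    """Root-mean-square error between observed and estimated dimensionless discharges."""
    q_star_obs, q_star_est = np.asarray(q_star_obs), np.asarray(q_star_est)
    return np.sqrt(np.mean((q_star_obs - q_star_est) ** 2))

# Example at model scale: Hm0 = 0.10 m, ramp crest free-board Rr = 0.125 m.
print(eurotop_q(Hm0=0.10, R=0.125))
```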
The formula of the EurOtop Manual [30] fits the observed values quite well using γ f = 1. The rmse values are: 0.47 for R1, 0.22 for R2 and 0.01 for R3. However, a more detailed analysis shows that the decrease of d d leads to a difference between the observed values and the values estimated using Equation (6). This behavior is due to the effect of the armor roughness. It was observed that the shorter the height of the submerged sloping plate (d d ) with respect to the wavelength, the larger the effect of the armor roughness. This aspect can be clarified by observing Figure 9, which shows the ratio between the dimensionless measured overtopping discharge and the one estimated by Equation (7). More in detail, the effect of the armor roughness is negligible for d d /L m−1,0 > 0.05, and Equation (7) with γ f = 1 predicts the observed overtopping discharge with sufficient accuracy (the spread is ±20%). For d d /L m−1,0 < 0.05, the effect of the armor roughness is very relevant, and Equation (7) with γ f = 1 overestimates the overtopping discharge. Bruce et al. [32] estimated γ f for various types of homogeneous armor units. For a composite structure such as OBREC, a constant value of γ f cannot be defined; rather, this parameter has to be evaluated according to the incident wave characteristics and to the length of the submerged ramp. In order to evaluate the effect of the armor roughness for the analyzed configurations, the γ f that minimizes the differences between the values of q reservoir observed and those estimated with the EurOtop Manual [30] relationship (Equation (6)) was estimated (see Figure 10), and a relationship between γ f and the relative ramp depth d d /L m−1,0 was determined (Equation (10)), with fitted coefficients s 1 = 7.47 and s 2 = 0.42. The lower limit γ f = 0.7 was detected by analyzing the AAU12 tests, which were performed with d d close to 0 m. Indeed, for such tests, it was observed that Equation (6) can be applied adopting a value of 0.7 for γ f [8]. Figure 10a also shows the comparison between the values of γ f estimated on the basis of the AAU14 tests and the values predicted by Equation (10). Figure 10b shows the comparison between the q reservoir observed and the q reservoir estimated with Equation (6) using the γ f computed from Equation (10); in Figure 10b, the red line indicates the perfect matching between the two quantities. Comparisons of q* reservoir between the flat and curved configurations are presented in Figure 11. As can be seen from Table 1, the ramp crest free-board (R r ) of the flat configuration is 4 mm greater than that of the curved configuration. Hence, in order to make the comparisons more reliable, the non-dimensional overtopping rates in Figure 11 are divided by the relative crest free-board. The comparison shows that the curved configuration gives approximately 22% less overtopping than the flat configuration. Reflection Several previous studies have been performed to analyze the reflection caused by coastal structures. Some of these studies have enabled the development of new prediction methods to estimate the reflection coefficient K r , i.e., the ratio between the reflected (H m0,r ) and incident (H m0,i ) wave heights. 
For rubble mound breakwaters, the reflection coefficient can be expressed as a function of the structure slope angle α, the incident wave characteristics and the deep water wavelength L m−1,0 , estimated with reference to T m−1,0 at the structure toe. Recently, Zanuttigh and van der Meer [33] developed a new estimation formula for various types of structures. This formula allows one to estimate K r once the incident wave characteristics (H m0 and T m−1,0 ) at the structure toe, the slope of the structure (α) and the roughness coefficient γ f are known, in the form K r = tanh(a·ξ m−1,0 ^b ), where ξ m−1,0 is the breaking parameter and the coefficients a and b are defined in [33] as functions of γ f . Vicinanza et al. [8] suggested the adoption of Equation (12) for the estimation of K r . This method, in fact, appears appropriately conservative. In more detail, the analysis showed a good agreement between the predicted and the observed K r adopting a γ f value equal to 0.55. Moreover, for the configuration in the AAU12 tests with greater q* reservoir , a reduction of K r was recognized. This behavior was explained by the fact that the device captures the incoming wave energy. Such an aspect was also observed for breakwaters by Zanuttigh and van der Meer [33]: the lower the crest, the greater the overtopping and the lower the reflection. In order to take this behavior into account, the authors introduced a reduction factor of the reflection coefficient (Equation (14)), where R* is the relative crest free-board of the structure. Equation (14) was estimated for rock permeable slopes, and it can be applied in the following range: R* ≥ −1; H m0 /D n50 ≥ 1; and s m−1,0 ≥ 0.01. Figure 12 shows the comparison between the reflection coefficients (K r,obs ) measured in the AAU14 tests and those estimated with Equations (12) and (13) (K r,est ). A good correspondence can be found. In particular, the value of K r,est is estimated using three different values of γ f with the variation of the water level: 0.55 for R1, 0.8 for R2 and 0.9 for R3. As can be seen from Figure 12, an increase of the submerged ramp length (d d ) causes an increase of the reflection. This aspect is consistent with the observation of the higher overtopping discharges due to the decrease of the armor roughness made in the previous section. In order to also take into account the effect of the submerged ramp length, a new prediction method based on Equation (12) has been derived. Using the values of γ f estimated from Equation (10), the method of Zanuttigh and van der Meer [33] overestimates the experimental data. Such behavior is caused by the structural differences between the models used by Zanuttigh and van der Meer [33] and the OBREC. Therefore, to evaluate K r , a corrective coefficient c γf was introduced (Equation (15)), with γ f estimated by Equation (10) and c γf evaluated through Equation (16) as a function of a parameter X Kr (Equation (17)), with fitted coefficients s 1,r = 2.64 and s 2,r = 0.28. The parameters s 1,r and s 2,r were evaluated by a fitting process using the least squares method. Figure 13a shows the comparison between the c γf detected from the experiments and the values calculated with Equation (16). Figure 13b shows the comparison between the K r observed and the K r estimated with Equations (12), (13) and (15). 
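A sketch of the reflection-coefficient estimate is given below. The tanh form follows Zanuttigh and van der Meer [33]; the specific expressions used here for the coefficients a and b as functions of γ_f are the ones commonly quoted for that method and should be checked against [33] before use, and the OBREC-specific corrective coefficient of Equations (15)-(17) is not reproduced.

```python
import math

def breaker_parameter(Hm0, Tm10, tan_alpha, g=9.81):
    """Spectral breaker (Iribarren) parameter xi_{m-1,0} = tan(alpha)/sqrt(s_{m-1,0})."""
    s_m10 = 2.0 * math.pi * Hm0 / (g * Tm10**2)   # wave steepness at the toe
    return tan_alpha / math.sqrt(s_m10)

def reflection_coefficient(xi, gamma_f):
    """K_r = tanh(a * xi^b); the a and b expressions below are assumed from the
    Zanuttigh-van der Meer method and depend on the roughness factor gamma_f."""
    a = 0.167 * (1.0 - math.exp(-3.2 * gamma_f))
    b = 1.49 * (gamma_f - 0.38) ** 2 + 0.86
    return math.tanh(a * xi**b)

xi = breaker_parameter(Hm0=0.10, Tm10=1.5, tan_alpha=math.tan(math.radians(34)))
print(reflection_coefficient(xi, gamma_f=0.55))
```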
Figure 14 shows the values of K r as a function of the breaking parameter (ξ m−1,0 ), classified depending on R r and on the ramp shape (flat and curved). The comparison shows that for large values of R r (cases R1 and R2), the difference between the flat configuration and the curved configuration is negligible. Such a difference becomes relevant for the case with small values of R r , that is R3. This behavior occurs because the reflection coefficient tends to increase with the slope: near the ramp crest, the curved configuration has a slope smaller than that of the flat configuration, causing an attenuation of the reflection. Wave Overtopping at the Rear Side of the Structure Coastal engineers generally design breakwaters with the main aim of limiting the overtopping discharge at the rear side of the crown wall. OBREC can be considered as a slope with a crown wall provided with a parapet. For this type of structure, there were no specific studies before the AAU12 tests. In order to address this shortcoming, Vicinanza et al. [8] developed a new prediction formula (Equation (18)), where ∆R c is the difference between R c and R r . The range of application of Equation (18) is: 0.014 < ∆R c /L m−1,0 < 0.038; 0.035 < s m−1,0 < 0.058; 1.24 < R c /H m0 < 1.38 [8]. Van Doorslaer et al. [34] analyzed several configurations to reduce the wave overtopping over a smooth dike slope. The authors extended the prediction method of the EurOtop Manual [30] (Equation (6)) to structures very similar to OBREC. One of these structures is composed of a promenade and a storm wall, and another one is composed of a promenade, a storm wall and a parapet. The authors modified the EurOtop Manual [30] formula (Equation (19)) by introducing an overall reduction factor γ VD , defined in one way for a smooth breakwater with a promenade and a storm wall, and in another way for a smooth breakwater with a promenade, a storm wall and a parapet, where the coefficients γ v , γ par and γ prom take into account, respectively, the reduction effect of the storm wall, the reduction effect of the parapet and the reduction effect of the promenade. Methods for the estimation of the reduction factors are given in Van Doorslaer et al. [34]. Figure 15 shows the comparison between the observed values and Equation (18). The overtopping discharge for the configuration ∆B rs = 0.30 m is estimated fairly well by Equation (18), while for the configurations ∆B rs = 0.10 m and ∆B rs = 0.20 m, the overtopping is underestimated. Such behavior is due to the fact that Equation (18) does not take into account the effects of the reservoir width. AAU14 measured discharges versus values computed by Equation (19) for two different configurations tested by Van Doorslaer et al. [34] are reported in Figure 16a,b. For the structure without a parapet, Equation (19) interprets the experimental data quite well for ∆B rs = 0.20 m. However, for ∆B rs = 0.10 m and ∆B rs = 0.30 m, the average overtopping discharges are respectively underestimated and overestimated. For the structure with a parapet (Figure 16a,b), the approach of Van Doorslaer et al. 
[34] tends to underestimate the measured values. The main reason for these discrepancies can be explained by the absence of the reservoir. Table 4 shows the performances of the two prediction methods. Based on the AAU14 tests, a new prediction method was developed. The average wave overtopping at the rear side can be expressed as a functional relationship of the wave and geometrical parameters, which can be arranged in terms of dimensionless parameters. On the basis of the AAU14 tests with the flat ramp, the overtopping discharge at the rear side of the breakwater could be evaluated by the relation of Equation (24), where a rear = 0.0139 and b rear = −7.17, and the parameter X rear is defined by Equation (25). The parameters a rear and b rear were evaluated by a fitting process using the least squares method. Figure 17 shows the comparison between the measured overtopping discharges and those calculated from Equation (24). The experimental data are interpreted very well by Equation (24), with a good correlation coefficient (R² > 0.96) and an rmse = 0.69. In order to make a comparison and to establish the reliability of Equation (24), the AAU12 data are also reported in Figure 17. The new formula can also be applied to the curved configuration by adopting an appropriate correction factor. Indeed, the curved shape of the ramp causes less wave overtopping discharge than the flat configuration; a mean difference of approximately 20% was estimated. Therefore, a new correction factor γ q,rear was introduced. Such a factor equals 1 for the flat configuration and 0.83 for the curved configuration. Equation (24) can then be rewritten to include this factor (Equation (26)). The comparison between the observed values (flat and curved configurations) and those estimated using Equation (26) is shown in Figure 18. Conclusions This paper presents a study of the innovative Overtopping BReakwater for wave Energy Conversion (OBREC). Moving on from the complementary tests by Vicinanza et al. [8], 2D model tests were carried out at Aalborg University. The main purpose of this new experimental campaign was to extend the knowledge of the device behavior, with particular interest in the influence of the shape and draft of the front ramp, as well as the reservoir width. This parametric study aims to complement the previous one, in which different ramp crest elevations were tested. Two different device configurations were analyzed: the first is characterized by a constant slope ramp; the second presents a curved ramp profile. For both configurations, tests with three different reservoir widths and three water levels were considered. Wave overtopping in the front reservoir could be predicted by the method of the EurOtop Manual [30] with relatively accurate estimations. However, due to the dual nature of OBREC, a high level of accuracy in the overtopping discharge effectively usable for energy production is required. Thus, a new method to estimate the roughness factor as a function of the submerged ramp length and the wavelength was developed. As regards the reflection coefficient, the two tested configurations show the same behavior. However, the reflection coefficients measured in the previous test campaign were lower than the present ones. Such behavior is mainly caused by the different extension of the submerged ramp. The prediction method of Zanuttigh and van der Meer [33] can be used to estimate the wave reflection, although it is necessary to apply a correction factor that is a function of the wave characteristics and of some geometrical characteristics of the ramp. 
A reduction of approximately 20% in overtopping rates in the front reservoir and at the rear side of the structure was observed for the curved configuration. This could significantly affect the potential energy production of the system but, at the same time, a higher safety level at the rear side of the crown wall can be ensured in comparison to the flat configuration. A new prediction method to estimate the overtopping discharge at the rear side of the structure was proposed, which was proven to perform remarkably well with respect to the present dataset. Based on the approach proposed by Vicinanza et al. [8], the new formula allows one to take into account the effects of the reservoir width. The analysis described in this paper represents a further step forward in the knowledge of the OBREC device. Future studies will mainly focus on the evaluation of the energy production performance and on how it can be affected by the geometrical characteristics of the device (width and depth of the reservoir, height of the sloping plate, etc.) and by the turbine characteristics. Acknowledgments: The work discussed here is a part of the National Operational Programme for "Research and Competitiveness" 2007-2013 (NOP for R&C) funded Project PON04a3_00303 titled "DIMEMO-DIga Marittima per l'Energia del Moto Ondoso" (Maritime Breakwater for Wave Energy Conversion), Project PON04a3_00303. The work was also partially supported by the RITMARE (acronym of la Ricerca ITaliana per il MARE) Flagship Project, by the project PON02_000153_2939551 "Development of innovative technologies for energy saving and environmental sustainability of shipyards and harbor areas" (SEAPORT, acronym of Sviluppo di tecnologie innovative per la Sostenibilità Energetica ed Ambientale di cantieri navali ed aree PORTuali) and by the HYDRALAB PLUS project (Proposal Number 64110). The authors gratefully acknowledge the Italian Ministry of Education, University and Research for supporting this innovative research and the Second University of Naples and the University of Catania for encouraging the mobility of researchers. Important input from Thomas Lykke Andersen in the study design and data analyses is gratefully acknowledged. The authors would also like to thank the Aalborg University technicians, in particular Niels Drustrup, for their cheerfulness, tolerance and considerable assistance in the design evolution, model construction and testing. Assistance in the testing and in supporting the physical modeling from Vincenzo Ferrante (Second University of Naples) is also gratefully acknowledged. Author Contributions: Claudio Iuppa contributed to the work described in this paper by collaborating in carrying out the laboratory campaign and analyzing the acquired data. Pasquale Contestabile contributed to the work by designing and collaborating in carrying out the laboratory campaign and analyzing the acquired data. Luca Cavallaro contributed to the work by analyzing the acquired data. Enrico Foti contributed with knowledgeable discussion and suggestions. Diego Vicinanza gave the idea, supervised the data analyses and gave the final approval. All of the co-authors participated in writing the paper. 
Conflicts of Interest: The authors declare no conflict of interest.

Notation: B r (m) reservoir width; B s (m) emerged sloping plate width; d d (m) height of the submerged sloping plate; D n50 (m) equivalent cube side length exceeded by 50% of the stones; d w (m) height of the sloping plate; g (m·s⁻²) gravity acceleration; h (m) water depth at the toe of the structure; h box (m) water depth in the accumulation box; h r (m) depth of the front reservoir; H m0 (m) incident significant wave height at the toe of the structure; H m0,r (m) reflected significant wave height at the toe of the structure; K r = H m0,r /H m0 (-) reflection coefficient; L m−1,0 (m) deep water wavelength referenced to T m−1,0 ; m 0 (m²) spectral moment of order 0; m −1 (m²·s) spectral moment of order −1; q rear (m³·m⁻¹·s⁻¹) average overtopping discharge towards the rear of the traditional rubble mound breakwater crown wall or of the OBREC crown wall; q reservoir (m³·m⁻¹·s⁻¹) average overtopping discharge into the reservoir; q* rear (-) non-dimensional overtopping discharge towards the rear of the traditional rubble mound breakwater crown wall or of the OBREC crown wall; q* reservoir (-) non-dimensional overtopping discharge into the reservoir; R (m) crest free-board of the structure; R c (m) crest free-board of the crown wall, i.e., the vertical distance between the crest of the vertical wall and the still water level; R r (m) crest free-board of the front reservoir, i.e., the vertical distance between the crest of the sloping plate and the still water level; R* c = R c /H m0 (-) relative crest free-board of the crown wall; R* r = R r /H m0 (-) relative crest free-board of the front reservoir; rmse (-) root mean square error; s m−1,0 = 2πH m0 /(g·T m−1,0 ²) (-) wave steepness at the toe of the structure; s Rr (-) non-dimensional wave-structure steepness; T m−1,0 = m −1 /m 0 (s) spectral incident energy wave period at the toe of the structure; T p (s) incident peak wave period; α (°) slope angle of the structure; γ (-) peak-enhancement factor; γ b (-) reduction factor for a berm; γ β (-) reduction factor for oblique wave attack; γ f (-) reduction factor for slope roughness; γ par (-) reduction factor for the parapet; γ prom (-) reduction factor for the promenade; γ v (-) reduction factor for the storm wall; ρ (kg·m⁻³) water density; ξ m−1,0 (-) breaker parameter referenced to T m−1,0 ; ∆B rs = B r − B s (m) horizontal distance between the crown wall and the crest of the ramp; ∆R c = R c − R r (m) vertical distance between the crown wall and the crest of the ramp.

Figure 2. Plant and cross-section of the wave flume. OBREC, Overtopping BReakwater for Energy Conversion. Figure 5. Wave-by-wave system for flow discharge measurement. Figure 9. Comparison between the dimensionless measured overtopping discharge and the one estimated by Equation (7) using γ f = 1. Figure 10. Overtopping discharge in the front reservoir: (a) comparison between γ f estimated on the basis of the AAU14 tests and the values predicted by Equation (10); (b) comparison between the q reservoir observed and the q reservoir estimated with Equation (6) using the γ f computed from Equation (10); in (b) the red line indicates the perfect matching between the two quantities. Figure 11. Overtopping into the front reservoir: q* reservoir /R* r observed with the flat configuration versus q* reservoir /R* r observed with the curved configuration. Figure 12. Comparison between the K r measured in the AAU14 tests and those estimated by Equation (12). Figure 13. Prediction formula: (a) comparison between the c γf detected from the experiments and the values calculated with Equation (16); (b) comparison between the K r observed and the K r estimated with Equations (12), (13) and (15). Figure 14. Reflection coefficient: comparison between the flat and the curved configurations. Figure 16. Comparison between the observed values and Equation (19) (γ v = 0.78; γ par = 0.72): (a) the case of the structure without a parapet; (b) the case of the structure with a parapet. Figure 17. Comparison between Equation (24) and the data observed in the AAU14 tests with a flat ramp and in the AAU12 tests. Figure 18. Overtopping at the rear of the structure: comparison between Equation (26) and the values observed during the AAU14 tests (flat and curved configurations). Table 1. Geometrical characteristics of the different configurations tested. The values refer to the Aalborg University 2014 (AAU14) and AAU12 tests.
9,178.4
2016-11-25T00:00:00.000
[ "Engineering", "Environmental Science" ]
Task Scheduling Scheme Based on Cost Optimization in 5G/Hetnets C-RAN. With the increase of data traffic in the global mobile network, computing data close to the edge is becoming more and more necessary to deal with resource limitations. This paper addresses the Cloud Radio Access Network (C-RAN) architecture and proposes to provide extra computing and storage resources at the edge in order to allow the offloading of a set of mobile users' services from the remote cloud computing infrastructure to a cloud computing infrastructure deployed at the edge, next to the Remote Radio Heads (RRHs). This approach raises many challenges. One of these challenges is the scheduling strategy of the offloading. Therefore, the main contribution described in this paper is a novel cost-based service scheduling (CBSS) mechanism which takes into account deployment cost, deadline and available resources in order to make offloading decisions more efficient and to increase the user experience. The solution was implemented in a simulator to highlight the benefit of the approach compared to an existing approach. Introduction The evolution toward global mobile networks is characterized by an exponential growth of traffic. It is estimated that data traffic will grow at a compound annual growth rate of 47 percent from 2016 to 2021 [1]. This growth is mainly due to the huge success of smartphones and tablets. Nowadays, smartphones and tablets are real computers capable of running a large variety of applications in many areas: entertainment, health care, business, social networking, traveling, news, and more. More and more applications are virtualized and run in the cloud, overcoming the limited capacities of the end-user devices. However, this necessitates an end-to-end communication from the mobile terminal to the application or service deployed in the far-end cloud computing infrastructure. With the concept of Mobile Cloud Computing (MCC), the idea is to deploy additional cloud computing resources to allow some parts of the applications/services to run locally and to offload the communications from the backend towards this local cloud, saving resources and increasing the end-user experience. More precisely, a cloud-based radio access network has already been proposed for 5G to decouple the Base Band Units (BBUs) from the remote radio heads (RRHs) and to move them into the cloud, enabling centralized processing and management. With this approach, traditionally complicated base stations can be simplified to cost-effective and power-efficient radio units (RRHs) by centralizing the processing, allowing the efficient management of large-scale small-cell systems. Centralized processing power indeed enables more advanced and efficient network coordination and management. On the other side, mobile data offloading to external extra resources (such as WiFi) is also an important and popular issue in cellular networks. It consists of offloading the data communication from the mobile network access to another wireless access network (WiFi, femto, etc.), thereby using additional resources. This offloading can also target the processing, using alternative storage and processing capabilities close to the end users. Several state-of-the-art proposals exploit cloud computing technology for this purpose [2]. 
Our work is related to this context. We propose a novel Cloud RAN heterogeneous architecture in which we introduce an edge cloud: the Cloud-RRH. It consists of additional computational and storage resources added to High RRHs (macro-cells), close to mobile end users. Using this infrastructure, mobile users will be able to offload their applications/services from the far-end cloud computing infrastructure to the nearby Cloud-RRH. The technology chosen to support this offloading is containers [3], which provide a higher level of abstraction in terms of virtualization and isolation compared to other virtualization techniques. Therefore, in order to fully profit from this architecture, we need to efficiently schedule offloading requests among the different containers. That is why we propose a cost-based task scheduling scheme. In particular, we focus on overload and migration costs. Moreover, load balancing between containers has been taken into consideration. This paper is organized as follows. After this introduction, Section II describes the related works. In Section III, we present a model of the system, the formulation of the problem, and the basic idea of the proposed solution. Section IV presents a simulation of the system and of the solution, as well as initial results. Finally, Section V concludes this paper. Related Work Scheduling users' computing tasks is a hot challenge in the cloud computing environment. Optimal resource allocation or offloading request scheduling helps to guarantee application performance and to reduce operating costs. A set of existing works is discussed in this section. The authors in [4] have proposed a selective algorithm that uses the standard deviation to decide between the two scheduling algorithms Min-Min and Max-Min in order to minimize the total execution time of tasks. In [5], the improved Max-Min algorithm is modified to define two new algorithms based on the average execution time. Unlike Max-Min, the task with a run time just above the average is selected and assigned to the resource that gives the minimum run time. The average run time is calculated using the arithmetic mean for independent tasks and the geometric mean for dependent tasks. The main objective is to reduce the tasks' makespan. The authors in [6] have proposed a task scheduling algorithm based on priority. They define three levels of priorities: the scheduling level, which represents the objective to be achieved by the planner; the resource level, which represents the attributes available to achieve the desired goal; and the task level, which represents the available alternatives among which the best task should be scheduled first. Therefore, each task will require resources with a given priority, and the priorities of the different tasks are compared with each other in order to be scheduled. In [7], the authors have proposed a task scheduling algorithm based on credits. The proposed approach is based on two parameters: the priority of the user and the duration of the task. A credit is assigned to each task according to its duration and priority. The task with the highest credit value is executed first. In [8], an optimized algorithm for task scheduling based on PSO (Particle Swarm Optimization) is proposed. PSO is a population-based search algorithm inspired by bird flocking and fish schooling, where each particle learns from its neighbors and from itself as it travels in space. However, like any other metaheuristic method, this algorithm does not give any guarantee of finding the most optimal solution. Consequently, whenever the 
search space expands, finding an optimal solution becomes harder and harder. The authors of [9] proposed a cost-deadline based task scheduling algorithm (CBD). The cost is calculated from the task length, the deadline and the number of processing elements required; a sorting mechanism then decides the execution order of the tasks, and their mapping onto virtual machines is obtained with the Min-Min heuristic. The approach is thus used to minimize missed deadlines. The authors of [10] investigated cost-based scheduling using linear programming. They proposed a task scheduling algorithm based on a delay-bound constraint (SAH-DB) that improves task execution concurrency: when a task is received, all resources (CPU, memory and network) are sorted in descending order of processing capacity, and the task is then dispatched to the resources with the minimum execution time.

In this paper, we propose a novel C-RAN architecture and a corresponding resource management mechanism in which a Cloud-RRH is introduced at the edge of the mobile network. While most previous works have focused on job completion time, we propose a scheduling optimization mechanism that aims to reduce the cost of task scheduling. Unlike previous works, we model the cost of tasks as a function of overloading and migration. The scheduling process mainly takes into account the available resources, the resource requirements, the deadlines and load balancing in the Cloud-RRH.

OFFLOADING SCHEDULING MECHANISM PROPOSAL

In this section, we discuss the considered scenario and problem statement before presenting our system model and formulating the optimization problem for offloading-request scheduling.

3.1 Scenario and problem statement

The scenario is depicted in Figure 1. We consider a C-RAN heterogeneous architecture composed of H-RRHs (High RRHs), which act as macro cells, and L-RRHs (Low RRHs), which act as small cells. In our scenario, we introduce the Cloud-RRH, which represents cloud capacity in the edge network. While in a traditional C-RAN architecture all the RAN functionalities are centralized in BBU pools, we propose to flexibly split these functionalities between the edge and the central cloud. We also assume that additional computation and storage resources are available in the Cloud-RRH for computation offloading. These resources are represented by cloud containers.

Fig. 1. Proposed C-RAN architecture

We propose to use cloud containers instead of VMs because of the performance gain. Indeed, VMs are usually larger than containers, since they include a whole operating system, and their startup is much slower. A container is essentially a packaged, self-contained, ready-to-deploy set of application parts, which may even include the middleware and business logic, in the form of binaries and libraries, needed to run the applications [11] (see Figure 2). Containers are characterized by: (i) a lightweight, portable runtime; (ii) the capability to develop, test and deploy applications on a large number of servers; and (iii) the capability to interconnect them.

Fig. 2. VM vs Container Virtualization Architecture

In current data centers, the control of virtual machines (VMs) requires a Virtual Infrastructure Manager (VIM), which is the entity in charge of VM lifecycle management.
In our approach, as part of the cloud management, we propose to add a new functional entity called the Cloudlet Manager (CM). Its main functionalities are the following. Mobile users can access their services directly in the edge cloud; the CM can instantiate containers at the edge and offload (part of) the service logic computation into these containers. Containers are not always active; rather, they are activated or deactivated as needed. The different interaction schemes are represented in Figure 3. Mobile users' application tasks can thus be offloaded to the Cloud-RRH to achieve better performance, and the Cloudlet Manager is responsible for deciding in which container the application tasks will be executed. A container is characterized by a triplet of allocated resources (CPU, RAM and network bandwidth). Each offloading request is considered as a set of tasks to instantiate in the Cloud-RRH, and each task has a delay constraint and resource requirements in terms of CPU, RAM and network bandwidth.

However, the task scheduler and the offloading decision must be carefully designed on the basis of the available resources and the concurrent requests. The research questions that we address are the following: 1. How to find the most suitable container for offloading application tasks so as to minimize the total cost, comprising the overloading cost and the migration cost? 2. How to schedule offloading requests while respecting load balancing between the containers of the Cloud-RRH?

System model

We assume that each Cloud-RRH infrastructure is able to run N predefined containers. Each container is characterized by its available capacity resources CPU_i, RAM_i and Net_i, i ∈ N. An offloading request is specified as a set of M tasks to execute within a deadline D. Each task is characterized by its CPU_j, RAM_j and Net_j requirements and has an expected execution time Tex_j, j ∈ M (the execution time if all resource requirements are satisfied). We consider a binary variable indicating whether task j is allocated to container i. We associate with each container-task allocation a cost C whose value depends on whether the container is overloaded after the execution of the task, and on whether a task migration was necessary due to user mobility. In this work we did not consider the energy consumption cost. The considered costs are detailed in the following.

Overload cost. Let C_cap^i denote the computational capacity of container i at time t, and let the average resource utilization of task j on container i be defined accordingly. The utilization rate μ_i of container i in the current system configuration follows from these quantities. When a task j is allocated to an overloaded container, we associate a penalty that we assume to be positively proportional to the level of overloading, and we define the overload cost metric cost_ov^i on this basis. The parameter λ accentuates the overload cost when the container approaches saturation: the closer the container gets to its maximum capacity, the more the cost increases, and the choice will go to another container in order to avoid saturation.

The overall overload cost for the Cloud-RRH system to execute all the tasks is obtained by summing these per-container costs.

Migration cost.
When a mobile user moves from one cell to another, the corresponding tasks may be migrated. We associate a penalty r_j with the migration of a user task j from one container to another, to capture the service downtime incurred by the migration; the overall migration cost is the sum of these penalties. In this paper, we only consider migrations of tasks within the same Cloud-RRH, and the migration penalty depends only on the type of task (migration across the whole network will be considered in future work). Intuitively, the two quantities, overload cost and migration cost, are correlated. For example, if we completely optimize the overload cost, tasks will be distributed over all available containers, which will increase the migration cost. We therefore need a trade-off between the two.

Optimization model. The goal of the scheduler is to minimize the total cost of overloading and migration in the entire system when executing all the submitted requests. We consider two parameters, α and β, that represent the weight given to each cost in the objective function. The optimization is subject to constraints (8) through (11). Constraint (8) guarantees that each offloading request is executed before the application's deadline. Constraint (9) enforces that each task's requirements, including the number of CPUs, the amount of memory and the network bandwidth, do not exceed the container resources. Constraint (10) guarantees load balancing between containers in the same Cloud-RRH, where ε denotes the maximum load-balancing tolerance. Finally, constraint (11) ensures that each task is scheduled on exactly one container.

First, we set α = β = 0.5, which means that equal weight is given to the overload and migration costs. We also consider that all tasks are executed in parallel, and the deadline constraint D is therefore fixed for the worst case, in which all tasks are executed serially. This problem is a MIP and can be solved as a linear program, since the objective function is linear in all variables.
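To make the cost model and constraints above more concrete, the following Python sketch illustrates a cost-based assignment of tasks to containers. It is not the authors' implementation: the class and function names (Container, Task, schedule_tasks) and the exact cost expressions are assumptions chosen to mirror the description, i.e., a utilization-based overload penalty accentuated by λ, a per-task migration penalty r_j, the weights α and β, and a crude load-balancing guard standing in for the ε tolerance. The paper itself formulates the problem as a MIP solved with CPLEX; this greedy sketch only illustrates the cost trade-off.

```python
from dataclasses import dataclass

@dataclass
class Container:
    cpu: float
    ram: float
    net: float
    used_cpu: float = 0.0
    used_ram: float = 0.0
    used_net: float = 0.0

    def utilization(self) -> float:
        # Average utilization over the three resource dimensions (CPU, RAM, network).
        return (self.used_cpu / self.cpu +
                self.used_ram / self.ram +
                self.used_net / self.net) / 3.0

@dataclass
class Task:
    cpu: float
    ram: float
    net: float
    migration_penalty: float = 0.0  # r_j: > 0 if placing the task implies a migration

def fits(c: Container, t: Task) -> bool:
    # Constraint (9): task requirements must not exceed the container resources.
    return (c.used_cpu + t.cpu <= c.cpu and
            c.used_ram + t.ram <= c.ram and
            c.used_net + t.net <= c.net)

def overload_cost(c: Container, t: Task, lam: float = 2.0) -> float:
    """Penalty that grows as the container approaches saturation; lam accentuates
    the cost near full capacity (a stand-in for the paper's lambda parameter)."""
    after = Container(c.cpu, c.ram, c.net,
                      c.used_cpu + t.cpu, c.used_ram + t.ram, c.used_net + t.net)
    return lam * after.utilization() ** lam

def schedule_tasks(tasks, containers, alpha=0.5, beta=0.5, eps=0.2):
    """Greedily place each task on the feasible container minimizing
    alpha * overload_cost + beta * migration_penalty, skipping containers whose
    utilization exceeds the mean by more than eps (crude load balancing)."""
    assignment = {}
    for j, t in enumerate(tasks):
        mean_mu = sum(c.utilization() for c in containers) / len(containers)
        best_i, best_cost = None, float("inf")
        for i, c in enumerate(containers):
            if not fits(c, t) or c.utilization() > mean_mu + eps:
                continue
            cost = alpha * overload_cost(c, t) + beta * t.migration_penalty
            if cost < best_cost:
                best_i, best_cost = i, cost
        if best_i is None:
            raise RuntimeError(f"task {j} cannot be placed under the current constraints")
        chosen = containers[best_i]
        chosen.used_cpu += t.cpu
        chosen.used_ram += t.ram
        chosen.used_net += t.net
        assignment[j] = best_i
    return assignment

if __name__ == "__main__":
    containers = [Container(cpu=4, ram=512, net=200) for _ in range(4)]
    tasks = [Task(cpu=1, ram=128, net=10, migration_penalty=0.1 * (j % 2)) for j in range(8)]
    print(schedule_tasks(tasks, containers))
```

Because the greedy loop re-evaluates the mean utilization before each placement, tasks naturally spread over containers; an exact solution of the MIP would instead optimize all placements jointly.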
SIMULATION AND RESULTS

In order to evaluate the scheduling performance, in terms of task execution cost, of the proposed cost-based scheduling scheme (CBSS), we compared its results with the SAH-DB scheduling mechanism. SAH-DB is a task scheduling algorithm based on linear programming. It aims to schedule tasks while reducing the total execution cost within the user-expected delay bound. When a task t is utilizing a resource k, the execution cost is expressed as the cost of resource k executing task t.

We considered a Cloud-RRH with N = {25, 50, 75, 100} containers having heterogeneous resources. The computing capacity of the containers varies from 1 to 10 CPUs, the memory from 128 Mbytes to 512 Mbytes and the network bandwidth from 100 Kbps to 200 Kbps. The number of tasks is set as M = {20, 40, 60, 80, 100, 120, 140}. Tasks have heterogeneous requirements: the CPU demand varies from 1 to 4, the memory between 128 and 1024 Kbytes and the network bandwidth between 1 and 20 Kbps. Offloading requests are embedded sequentially and their requirements are generated randomly. The simulation parameters are summarized in Table 1. As mentioned before, we set α = β = 0.5 and λ = 2. We used IBM's linear programming solver CPLEX [12] and solved the problem with multiple data inputs. We evaluated the scheduling efficiency in terms of execution cost under a varying number of associated tasks. Figure 4 shows the execution cost and its standard deviation obtained by applying the proposed cost-based scheduling scheme and the SAH-DB scheduling algorithm with 25 to 100 cloud containers, respectively. The proposed scheduling algorithm reduces the total execution cost compared with the SAH-DB algorithm for the different numbers of associated tasks. Moreover, the total scheduling cost decreases as the number of resources increases and increases with the number of associated tasks. Therefore, the more resources are available, the more efficient the scheduling process.

CONCLUSION AND PERSPECTIVES

This paper proposes a cost-based scheduling scheme (CBSS) that aims to minimize the scheduling cost while considering the available resources, resource requirements, deadline and load balancing in the Cloud-RRH. We consider a scenario where users can offload tasks to the Cloud-RRH, and we focus on scheduling tasks that request several resources such as CPU, memory and disk. We formulate the problem as a cost optimization problem that takes into account user performance in terms of system overload and migration cost. Simulation results show that the proposed scheme is able to schedule offloading requests while minimizing the total execution cost.

As future work, we will consider mobility between different Cloud-RRHs while scheduling offloading requests. Furthermore, we will further investigate and evaluate network performance by handling interference and mobility management in C-RAN.

Fig. 4. The execution cost with different resources

Table 1. PARAMETERS SETTING
3,915.8
2017-06-21T00:00:00.000
[ "Computer Science" ]
Parallel laser micromachining based on diffractive optical elements with dispersion compensated femtosecond pulses We experimentally demonstrate multi-beam high spatial resolution laser micromachining with femtosecond pulses. The effects of chromatic aberrations as well as pulse stretching on the material processed due to diffraction were significantly mitigated by using a suited dispersion compensated module (DCM). This permits to increase the area of processing in a factor 3 in comparison with a conventional setup. Specifically, 52 blind holes have been drilled simultaneously onto a stainless steel sample with a 30 fs laser pulse in a parallel processing configuration. ©2013 Optical Society of America OCIS codes: (260.1960) Diffraction theory; (320.2250) Femtosecond phenomena; (220.4000) Microstructure fabrication; (350.3850) Materials processing. References and links 1. J. Cheng, C.Liu, S. Shang, D. Liu, W. Perrie, G. Dearden, and K. Watkins, “A review of ultrafast laser materials micromachining,” Opt. Laser Technol. 46, 88–102 (2013). 2. R. Stoian, A. Rosenfeld, D. Ashkenasi, I. V. Hertel, N. M. Bulgakova, and E. E. B. Campbell, “Surface charging and impulsive ion ejection during ultrashort pulsed laser ablation,” Phys. Rev. Lett. 88(9), 097603 (2002). 3. N. M. Bulgakova, R. Stoian, A. Rosenfeld, I. V. Hertel, W. Marine, and E. E. B. Campbell, “A general continuum approach to describe fast electronic transport in pulsed laser irradiated materials: The problem of Coulomb explosion,” Appl. Phys., A Mater. Sci. Process. 81(2), 345–356 (2005). 4. J. Kato, N. Takeyasu, Y. Adachi, H. Sun, and S. Kawata, “Multiple-spot parallel processing for laser micronanofabrication,” Appl. Phys. Lett. 86(4), 044102 (2005). 5. S. Matsuo, S. Juodkazis, and H. Misawa, “Multiple-spot parallel processing for laser micronanofabrication,” Appl. Phys., A. 80, 683–685 (2004). 6. P. S. Salter and M. J. Booth, “Addressable microlens array for parallel laser microfabrication,” Opt. Lett. 36(12), 2302–2304 (2011). 7. D. Kim and P. T. C. So, “High-throughput three-dimensional lithographic microfabrication,” Opt. Lett. 35(10), 1602–1604 (2010). 8. D. N. Vitek, D. E. Adams, A. Johnson, P. S. Tsai, S. Backus, C. G. Durfee, D. Kleinfeld, and J. A. Squier, “Temporally focused femtosecond laser pulses for low numerical aperture micromachining through optically transparent materials,” Opt. Express 18(17), 18086–18094 (2010). 9. S. Shoji and S. Kawata, “Photofabrication of three-dimensional photonic crystals by multibeam laser interference into a photopolymerizable resin,” Appl. Phys. Lett. 76(19), 2668–2670 (2000). 10. T. Kondo, S. Matsuo, S. Juodkazis, and H. Misawa, “Femtosecond laser interference technique with diffractive beam splitter for fabrication of three-dimensional photonic crystals,” Appl. Phys. Lett. 79(6), 725–727 (2001). 11. T. Kondo, S. Matsuo, S. Juodkazis, V. Mizeikis, and H. Misawa, “Multiphoton fabrication of periodic structures by multibeam interference of femtosecond pulses,” Appl. Phys. Lett. 82(17), 2758–2760 (2003). 12. Y. Kuroiwa, N. Takeshima, Y. Narita, S. Tanaka, and K. Hirao, “Arbitrary micropatterning method in femtosecond laser microprocessing using diffractive optical elements,” Opt. Express 12(9), 1908–1915 (2004). 13. Y. Hayasaki, T. Sugimoto, A. Takita, and N. Nishida, “Variable holographic femtosecond laser processing by use of a spatial light modulator,” Appl. Phys. Lett. 87(3), 031101 (2005). 14. S. Hasegawa and Y. 
Hayasaki, “Adaptive optimization of a hologram in holographic femtosecond laser processing system,” Opt. Lett. 34(1), 22–24 (2009). 15. Z. Kuang, W. Perrie, J. Leach, M. Sharp, S. P. Edwardson, M. Padgett, G. Dearden, and K. G. Watkins, “High throughput diffractive multi-beam femtosecond laser processing using a spatial light modulator,” Appl. Surf. Sci. 255(5), 2284–2289 (2008). 16. Z. Kuang, D. Liu, W. Perrie, S. Edwardson, M. Sharp, E. Fearon, G. Dearden, and K. Watkins, “Fast parallel diffractive multi-beam femtosecond laser surface micro-structuring,” Appl. Surf. Sci. 255(13-14), 6582–6588 (2009). 17. A. Jesacher and M. J. Booth, “Parallel direct laser writing in three dimensions with spatially dependent aberration correction,” Opt. Express 18(20), 21090–21099 (2010). 18. J. Cugat, A. Ruiz de la Cruz, R. Solé, A. Ferrer, J. J. Carvajal, X. Mateos, J. Massons, J. Solís, G. Lifante, F. Díaz, and M. Aguilgó, “Femtosecond-Laser Microstructuring of Ribs on Active (Yb,Nb): RTP/RTP Planar Waveguides,” J. Lightwave Technol. 31(3), 385–390 (2013). 19. J. Amako, K. Nagasaka, and N. Kazuhiro, “Chromatic-distortion compensation in splitting and focusing of femtosecond pulses by use of a pair of diffractive optical elements,” Opt. Lett. 27(11), 969–971 (2002). 20. B. C. Stuart, M. D. Feit, S. Herman, A. M. Rubenchik, B. W. Shore, and M. D. Perry, “Nanosecond-to-femtosecond laser-induced breakdown in dielectrics,” Phys. Rev. B Condens. Matter 53(4), 1749–1761 (1996). 21. L. Englert, M. Wollenhaupt, L. Haag, C. Sarpe-Tudoran, B. Rethfeld, and T. Baumert, “Material processing of dielectrics with temporally asymmetric shaped femtosecond laser pulses on the nanometer scale,” Appl. Phys., A Mater. Sci. Process. 92(4), 749–753 (2008). 22. E. L. Papadopoulou, E. Axente, E. Magoulakis, C. Fotakis, and P. A. Loukakos, “Laser induced forward transfer of metal oxides using femtosecond double pulses,” Appl. Surf. Sci. 257(2), 508–511 (2010). 23. J. R. Vázquez de Aldana, C. Méndez, and L. Roso, “Saturation of ablation channels micro-machined in fused silica with many femtosecond laser pulses,” Opt. Express 14(3), 1329–1338 (2006). 24. J. Lancis, G. Mínguez-Vega, E. Tajahuerce, V. Climent, P. Andrés, and J. Caraquitena, “Chromatic compensation of broadband light diffraction: ABCD-matrix approach,” J. Opt. Soc. Am. A 21(10), 1875–1885 (2004). 25. G. Mínguez-Vega, J. Lancis, J. Caraquitena, V. Torres-Company, and P. Andrés, “High spatiotemporal resolution in multifocal processing with femtosecond laser pulses,” Opt. Lett. 31(17), 2631–2633 (2006). 26. G. Mínguez-Vega, E. Tajahuerce, M. Fernández-Alonso, V. Climent, J. Lancis, J. Caraquitena, and P. Andrés, “Dispersion-compensated beam-splitting of femtosecond light pulses: Wave optics analysis,” Opt. Express 15(2), 278–288 (2007). 27. R. Martínez-Cuenca, O. Mendoza-Yero, B. Alonso, Í. J. Sola, G. Mínguez-Vega, and J. Lancis, “Multibeam second-harmonic generation by spatiotemporal shaping of femtosecond pulses,” Opt. Lett. 37(5), 957–959 (2012). 28. B. Alonso, I. J. Sola, O. Varela, J. Hernández-Toro, C. Méndez, J. San Román, A. Zaïr, and L. Roso, “Spatiotemporal amplitude and phase reconstruction by Fourier-transform of interference spectra of high-complex-beams,” J. Opt. Soc. Am. B 27(5), 933–940 (2010). 29. Ll. Martínez-León, P. Clemente, E. Tajahuerce, G.
Mínguez-Vega, O. Mendoza-Yero, M. Fernández-Alonso, J. Lancis, V. Climent, and P. Andrés, “Spatial-chirp compensation in dynamical holograms reconstructed with ultrafast lasers,” Appl. Phys. Lett. 94(1), 011104 (2009). 30. J. Lancis, E. Tajahuerce, P. Andrés, V. Climent, and E. Tepichín, “Single-zone-plate achromatic Fresnel-transform setup: Pattern tunability,” Opt. Commun. 136(3-4), 297–305 (1997).

Introduction

High-precision micro- and nano-structuring of materials with femtosecond laser pulses can only be accomplished under a detailed control of the ablation mechanism. This mechanism can be affected by many factors, which include, but are not limited to, material properties (e.g., electronic band structure) and laser parameters such as fluence or pulse duration [1]. Hence, the choice of dielectrics, semiconductors or metals for ultrafast laser processing strongly determines the nature of the ablation mechanism. For instance, while laser ablation of dielectrics should mainly be attributed to a multiphoton surface ionization process (Coulomb explosion), for metals at laser fluences below 1 J/cm² the dominant ablation mechanisms seem to be spallation and fragmentation [2,3]. In the low-fluence regime, thermal or mechanical damage in the surroundings of the processed area is also minimal. This attractive physical phenomenon, together with several well-established techniques for the generation of user-defined irradiance patterns, makes femtosecond laser processing a very promising tool for industrial applications. In particular, parallel processing by means of complex irradiance patterns avoids the extensive attenuation otherwise required for current regenerative or multipass amplifier systems, which provide pulse energies in the mJ range. Furthermore, it reduces the long fabrication time characteristic of sequential dot-by-dot scans over a sample. In this context, optical techniques for parallel processing have been used, among other tasks, to generate desired focal patterns with the help of microlens arrays [4–6], obtain temporal focusing of pulsed beams [7,8], achieve multibeam interference of femtosecond beams [9–11] or carry out holographic patterning for material micro-structuring by using diffractive optical elements (DOEs) [12–17]. In addition, holographic femtosecond laser processing assisted by a spatial light modulator (SLM) can also be regarded as a dynamic method for arbitrary irradiance patterning. Note that micro-structured surfaces could be useful, e.g., for the fabrication of microfluidic devices [17] or integrated photonics [18].
However, the use of ultrashort pulses for material processing requires dealing with both chromatic aberrations, due to the strong dependence of the diffraction phenomenon on the wavelength, and temporal stretching, which originates from several factors, e.g., material dispersion or the propagation time difference in free space. The former effect reduces the peak intensity and increases the pulse width at the focal point owing to wavelength dispersion. As a matter of fact, it was observed that the eccentricity of holes produced with a DOE at a frequency of 25 lp/mm was increased by a factor of two for a 160 fs laser [15]. Therefore, the bandwidth of the light source sets an upper limit on the useful processing area that is free from spatial distortion effects. Although several efforts have been made to compensate for the spatial distortion, less care has been paid to keeping the temporal width of the incident pulse unchanged over the processing area [12,17,19]. Amako et al. demonstrated that a pair of DOEs corrects the transversal chromatic aberration but increases the pulse duration [19]. Kuroiwa et al. showed that the large chromatic dispersion effects induced by DOEs can be reduced when the focusing of the pulse is performed with a refractive lens instead of being included in the DOE design [12]. Finally, Jesacher et al. proposed a way to fabricate 3D structures with high spatial quality by machining at different depths of a crystal, but close to the optical axis, to avoid chromatic aberration [17].

On the other hand, for non-thermal micromachining long pulses can be used (up to the range of about 10 ps, depending on the material) [20]. However, optimal energy coupling with the help of suitably shaped temporal pulses gives us the possibility to guide the ablation in user-defined directions, offering extended flexibility for high-quality material processing. Some experiments have demonstrated the influence of shaped femtosecond pulses on material processing. To mention a few cases, asymmetrically shaped pulses with third-order dispersion can produce holes smaller than the diffraction limit in dielectrics [21]; with double pulses delayed by a specific time it is possible to control the size of the spots transferred in metal oxides [22]; and the shape of ablation channels in fused silica can be modified by changing the pulse duration [23].

In this manuscript, we experimentally show that in femtosecond micromachining both chromatic dispersion [24] and pulse stretching can be compensated to first order with a properly designed DCM. The proposed DCM is made up of a hybrid diffractive-refractive lens triplet [25–27], which allows for wide-field holographic microprocessing under 30 fs pulses with high spatial resolution.
Preliminary example

The impact of chromatic aberration and temporal pulse elongation in holographic laser processing can be illustrated by the following example. Let us consider that a diffraction grating of period p0 is used to generate multifocal spots under pulsed illumination of central wavelength λ0. We assume for simplicity that the amplitude of the laser pulse is described by a one-dimensional electric field whose spatial and temporal irradiance profiles have root-mean-square (rms) widths σx and σt, respectively. After passing through the diffraction grating, the light is focused with a refractive lens of focal length f. In the focal plane of the refractive lens a set of diffraction orders (denoted by n) appears, generating a spatially distributed multifocal pattern. The rms widths of the spatial (σ'x) and temporal (σ't) irradiance profiles of the elongated focal spots can be roughly estimated by the expressions of [26], where σ0 is the rms width of the irradiance profile corresponding to the zero diffraction order, given by σ0 = λ0 f/(4π σx), and c is the speed of light. Note that the value of σ0 remains unchanged after removing the diffraction grating. From Eq. (1) it is apparent that the higher the diffraction order n, the larger both the spatial elongation in the transversal direction x and the pulse width.

To get an idea of the spatiotemporal elongation, we choose a set of typical parameters: f = 200 mm, λ0 = 800 nm, p0 = 40.5 μm (or 24.7 lp/mm), σλ = 18.7 nm and σx = 0.75 mm. For the unaffected zero diffraction order (n = 0), with σ0 = 17 μm, the pulse duration is kept constant, σ't = σt = 9.1 fs (without taking material dispersion into account). In contrast, for the first diffraction order (n = 1), σ'x = 94 μm and σ't = 50 fs. Hence, the ratios σ'x/σ0 and σ't/σt for the first diffraction order can be roughly estimated as a factor of 5. Such realistic values make the above optical setup unsuitable for applications in holographic parallel microprocessing with broadband pulses.

Dispersion compensated module

In order to significantly mitigate the effects of spatial and temporal broadening shown in the previous example, a suited DCM is used. In particular, we choose a hybrid refractive-diffractive DCM that was theoretically introduced in [25]. The optical setup is shown in Fig. 1. Here, it should be mentioned that a diffractive lens (DL) can be regarded as an optical element that focuses light by diffraction, with an inverse dependence on the wavelength of light. The focal lengths of DL1 and DL2 for λ0 are denoted by f1 and f2, respectively. Initially, the system acts as a Fourier transformer for the central wavelength, i.e., the field at the output plane is the Fourier transform of the field in the DOE plane for λ0. Then, we force the design to ensure that the irradiance patterns corresponding to each wavelength coalesce into a single one for all the spectral components of the pulse. In addition, the design also satisfies the condition that every ray impinging onto the DOE must have an identical arrival time at the back focal plane. Although exact compensation is not possible, we impose a first-order correction that leads to the geometrical constraints [25] l = f, d² = −f1 f2 and d' = −d²/(d + 2f1). These conditions guarantee that in the spatial domain (only for very high spatial frequencies) a residual low spatial elongation is observed, whereas in the time domain some radial group velocity dispersion effects appear [26,27].
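Since the explicit expressions of Eq. (1) and of the design constraints are only partially legible here, the following Python sketch reconstructs them in a hedged way: the quadrature forms for σ'x and σ't and the zero-order spot size σ0 = λ0 f/(4π σx) are assumptions chosen so that they reproduce the numbers quoted in the example (σ0 ≈ 17 μm, σ'x ≈ 94 μm, σ't ≈ 50 fs), and the DCM check simply evaluates the printed first-order constraints with the focal lengths given later in the Experiment section (f = 200 mm, f1 = −150 mm, f2 = 150 mm).

```python
import math

# Parameters quoted in the preliminary example (SI units).
f = 200e-3           # focal length of the refractive lens [m]
lam0 = 800e-9        # central wavelength [m]
p0 = 40.5e-6         # grating period [m] (~24.7 lp/mm)
sigma_lam = 18.7e-9  # rms spectral bandwidth [m]
sigma_x = 0.75e-3    # rms beam size at the grating [m]
sigma_t = 9.1e-15    # transform-limited rms pulse width [s]
c = 3.0e8            # speed of light [m/s]
n = 1                # diffraction order

# Zero-order spot size (assumed form, consistent with the quoted 17 um).
sigma_0 = lam0 * f / (4 * math.pi * sigma_x)

# Assumed quadrature forms for the chromatically broadened order n; a
# reconstruction of Eq. (1) that reproduces the quoted 94 um and 50 fs for n = 1.
sigma_x_n = math.hypot(sigma_0, n * f * sigma_lam / p0)
sigma_t_n = math.hypot(sigma_t, n * lam0 * sigma_x / (c * p0))

print(f"sigma_0  = {sigma_0 * 1e6:5.1f} um")    # ~17 um
print(f"sigma'_x = {sigma_x_n * 1e6:5.1f} um")  # ~94 um
print(f"sigma'_t = {sigma_t_n * 1e15:5.1f} fs") # ~50 fs

# DCM geometry: evaluate the printed first-order constraints
# l = f, d^2 = -f1*f2 and d' = -d^2/(d + 2*f1), with focal lengths in mm.
f_mm, f1, f2 = 200.0, -150.0, 150.0
l = f_mm                        # 200 mm
d = math.sqrt(-f1 * f2)         # 150 mm
d_prime = -d**2 / (d + 2 * f1)  # 150 mm
print(f"l = {l:.0f} mm, d = {d:.0f} mm, d' = {d_prime:.0f} mm")
```

Both checks agree with the values used in the experiment (l = 200 mm, d = d' = 150 mm), which supports reading the garbled constraint as d² = −f1 f2.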
Experiment

The experimental setup has mainly three parts: the laser system, the beam delivery block, and the micromachining zone. The light source is a mode-locked Ti:sapphire laser (Femtosource, Femtolaser). Following the manufacturer's specification, this laser emits slightly chirped pulses with a full width at half maximum of 30 fs (σt ≈ 13.3 fs). The spectral rms bandwidth measured in our laboratory is approximately σλ = 18.7 nm, which corresponds to a transform-limited pulse of σt = 9.1 fs. To reduce the beam size to σx = 0.75 mm, an iris is introduced in the beam path. The laser emits pulses with a maximum energy of 0.8 mJ at a 1 kHz repetition rate. Before exiting, the ultrashort pulses pass through a user-adjustable post-compression stage based on fused-silica Brewster prisms. Hence, one can introduce negative dispersion to later compensate the positive material dispersion in the beam delivery path.

Initially, the pulse impinges on a DOE (Edmund Optics) designed to provide an array of 8x8 spots in the Fourier plane. This DOE has a design wavelength of 635 nm. The DCM is composed of an achromatic lens L (Thorlabs AC254-200-B), with focal length f = 200 mm, coupled to the diffractive lens pair DL1 and DL2. The focal lengths of the DLs for λ0 are f1 = −150 mm and f2 = 150 mm, respectively. These DLs were fabricated by a photolithography process, achieving four phase steps and a diffraction efficiency of 80% for λ0. In particular, they were built on a silicon (Quarglas) substrate of 1 mm thickness and 50 mm diameter by mask lithography over the positive photoresist (A 3120). The corresponding axial distances in the DCM are l = 200 mm, d = 150 mm, and d' = 150 mm.

In the micromachining zone, the light from the intermediate plane was directed to the laser processing optics, composed of a refractive lens with a focal length of 100 mm and a 20X microscope objective of focal length 10 mm, working in a telescope configuration. Between the refractive lens and the microscope objective we placed a shutter to control the number of pulses that arrive at the stainless steel sample. Specifically, this sample was austenitic Cr-Ni stainless steel of type X5CrNi18-10, contained in a 3 mm thin metal sheet. The sample was mounted on an XYZ translation stage. In these conditions, the sample was irradiated by 500 pulses, forming microstructures on its surface.

Results and discussion

To compare the optical features of the setup shown in Fig. 1 with those of a setup without DCM, the DL pair is removed and the achromatic lens L is displaced. After that, the back focal plane of the lens L is located at the intermediate plane. To determine the energy over the sample with and without the DCM, the irradiance distribution of the focused beam was recorded with a CCD camera. In order to achieve the same energy for both setups at the zero diffraction order, the input average power was adjusted to 30 mW and 45 mW for the setups with and without DCM, respectively.

The irradiance profiles corresponding to different diffraction orders at the intermediate plane with and without DCM are shown in Fig. 2(a). When the pulse is focused with only the refractive lens (top images), the spatial broadening, e.g., for the frequency 11.1 lp/mm, is around σ'x = 52 μm. This result is in fairly good agreement with the theoretical value predicted by Eq.
(1), which is σ'x = 45 μm. A similar result is obtained for the frequency 24.7 lp/mm (experiment σ'x = 92 μm, theory σ'x = 94 μm). However, if the DCM is used to focus the light (bottom images), one can observe that the focal spots have almost the same shape, with an rms width of approximately σ'x ~ 24 μm.

On the other hand, Fig. 2(b) shows the instantaneous intensity for the central point of each diffraction order, measured with a recently reported technique [28]. It is based on the spatiotemporal amplitude and phase reconstruction by Fourier transform of the interference spectra of the optical beams (STARFISH). When the pulse is focused with only the refractive lens (top images), the temporal broadening, e.g., for the frequency 11.1 lp/mm, is around σ't = 28.3 fs. This result is in good agreement with the theoretical value predicted by Eq. (1), which is σ't = 27.7 fs. A similar result is obtained for the frequency 24.7 lp/mm (experiment σ't = 51.2 fs, theory σ't = 52.7 fs). However, if the DCM is used to focus the light (bottom images), one can observe that the temporal duration is almost the same as the corresponding one at the output of the laser, which is σ't ~ 13.5 fs. To see the processed focal spots over a wide spatial field we used an optical microscope. Details of the wide-field processed material are shown in Figs. 3(a) and 3(b). In the experiment, 52 blind holes were ablated when the DCM was used (note that some spots of the DOE were out of the pupil of the microscope objective). This number was reduced to 16 when a conventional setup was employed. Owing to the dispersion effects, in the latter case only the focal spots corresponding to lower spatial frequencies had enough fluence to micromachine the material surface. In our setup, the highest spatial frequency marked on the metal surface without the DCM was 33 lp/mm, while with the DCM we achieved ablation at 50 lp/mm, which implies an increase of more than 3 times in the ablation area. Spatial features of the ablated material with micrometric resolution were also observed with the help of SEM images, as shown in Fig. 4. In particular, drilled holes obtained for the same spatial frequencies with and without the DCM were compared. From Fig. 4 it is clear that the shape of the processed spots affected by chromatic aberrations departs to a great extent from the ideal circular form. This fact prevents the use of certain DOEs for micromachining under ultrashort pulsed illumination. However, when the DCM was used, the resulting spots recovered the desired circular shape.
Conclusions

We have shown that correction of the chromatic aberration induced by DOEs is mandatory to carry out parallel laser micromachining under ultrashort pulsed illumination. To this end, the integration of a suited diffractive-refractive DCM into the processing line was proposed and experimentally validated, giving excellent results. The demonstrated ability of the DCM to perform not only spatial but also temporal shaping of ultrashort pulses may be even better suited to laser microprocessing of semiconductors or dielectrics. In fact, its application to surface nanostructuring and/or dielectric submicro-processing is highly encouraged owing to the greater sensitivity of these experiments to pulse duration. Note that the use of a spatial light modulator instead of a static DOE introduces an additional degree of freedom into the proposed optical setup, allowing dynamic processing of the material surface [29]. Meanwhile, in the Fresnel regime the applicability of dispersion compensation modules is a subject of continuing studies [30].

Fig. 1. Schematic of the diffractive-refractive optical system used to improve multifocal micromachining.
Fig. 2. Measurements in the intermediate focal plane: a) Details of the irradiance profile without DCM (top) and with DCM (bottom). b) Instantaneous intensity for the central point of each diffraction order without DCM (top) and with DCM (bottom).
Fig. 3. Details of a region of the surface of the ablated sample observed with an optical microscope (a) with a conventional setup and (b) with the DCM.
Fig. 4. SEM images of the holes corresponding to different diffraction orders without DCM (top) and with DCM (bottom).
5,118.4
2013-12-30T00:00:00.000
[ "Engineering", "Physics" ]
Cardiac Adiposity and Arrhythmias: The Role of Imaging

Increased cardiac fat depots are metabolically active tissues that have a pronounced pro-inflammatory nature. Increasing evidence supports a potential role of cardiac adiposity as a determinant of the substrate of atrial fibrillation and ventricular arrhythmias. The underlying mechanism appears to be multifactorial, with local inflammation, fibrosis, adipocyte infiltration, electrical remodeling, autonomic nervous system modulation, oxidative stress and gene expression playing interrelating roles. Current imaging modalities, such as echocardiography, computed tomography and cardiac magnetic resonance, have provided valuable insight into the relationship between cardiac adiposity and arrhythmogenesis, in order to better understand the pathophysiology and improve risk prediction of patients over the presence of obesity and traditional risk factors. However, at present, given the insufficient data on the additive value of imaging biomarkers over commonly used risk algorithms, the use of different screening modalities is currently indicated for personalized risk stratification and prognostication in this setting.

Cardiac Adiposity Pathophysiology

Overwhelming evidence supports the idea that adipose tissue acts as an endocrine organ with a significant impact on cardiovascular function [1]. Obesity is associated with adipose tissue dysfunction, including increased secretion of pro-inflammatory and decreased secretion of anti-inflammatory factors, thus contributing to the insulin resistance, glucose intolerance, hypertension and abnormal lipid metabolism that are often seen in obese people [1,2]. These alterations affect the heart and vessels, resulting in an increase in cardiovascular (CV) events. This risk is significantly linked to the distribution of fat rather than to body mass index (BMI) or total adiposity, being much higher in the presence of visceral adipose tissue (VAT) and increased ectopic fat accumulation in normally lean organs, such as the liver, heart and skeletal muscles [2–4]. Increased cardiac fat depots are metabolically active tissues with a pronounced pro-inflammatory nature, which is enhanced in obesity and type-2 diabetes [3]. Ectopic cardiac fat may be located pericardially (the adipose tissue surrounding the parietal pericardium).

Echocardiography is a safe, easily reproducible method, which can measure fat thickness in front of the free right ventricular wall in the parasternal long- and short-axis views (Figure 1) [14]. Difficulties in calculating the whole EAT volume and in distinguishing EAT from PAT or pericardial effusion are the main disadvantages of the method. A cut-off value of >5 mm for EAT thickness has been correlated with increased CV risk [34–37].

Cardiac CT has been increasingly used for assessment of EAT/PAT (Figure 2) [15]. A radiodensity threshold of −190 to −30 Hounsfield units (HU) on non-contrast scans and of −190 to −3 HU on contrast-enhanced CT scans is accurate and reproducible for the diagnosis and quantification of EAT volume [16]. In addition to EAT volume, quantification of CT-derived fat attenuation has been correlated with local and systemic inflammatory markers, reflecting unfavorable metabolic activity [17]. In the presence of increased inflammation, higher CT attenuation of EAT is expected. Furthermore, CT can provide information about inflammation of EAT tissue in conjunction with positron emission tomography (PET) [18]. Concurrently, CT provides information about calcification of the coronary arteries and coronary stenoses, while its main disadvantages are the exposure to ionizing radiation and the nephrotoxicity induced by the contrast material [19,20]. Furthermore, CT can evaluate arterial inflammation in combination with positron emission tomography (PET/CT). In two population-based studies using CT, the Framingham Heart Study and the Multi-Ethnic Study of Atherosclerosis, EAT/PAT has been identified as an independent risk predictor for CV disease in the general population [21–23]. In keeping with these results, other studies demonstrated that CT-derived EAT/PAT was significantly correlated with a high atherosclerotic burden of the underlying coronary arteries, incident myocardial infarction and atrial fibrillation (AF) development [24,25,38].
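As a rough illustration of how such a HU-threshold rule can be applied in practice, the following Python sketch computes an EAT volume and mean attenuation from a CT volume. It is a minimal sketch under stated assumptions, not a validated workflow: the function names are hypothetical, the pericardial mask is assumed to be supplied by manual or automatic contouring, and only the −190 to −30 HU non-contrast window quoted above is taken from the text.

```python
import numpy as np

def eat_volume_ml(ct_hu: np.ndarray, pericardial_mask: np.ndarray,
                  spacing_mm=(1.0, 1.0, 1.0), hu_range=(-190, -30)) -> float:
    """EAT volume (mL) as the fat-attenuation voxels inside the pericardial mask."""
    fat_mask = (ct_hu >= hu_range[0]) & (ct_hu <= hu_range[1]) & pericardial_mask
    voxel_ml = np.prod(spacing_mm) / 1000.0   # mm^3 -> mL
    return float(fat_mask.sum() * voxel_ml)

def mean_eat_attenuation(ct_hu, pericardial_mask, hu_range=(-190, -30)) -> float:
    """Mean HU of EAT voxels; higher (less negative) values have been linked to
    local inflammation."""
    fat_mask = (ct_hu >= hu_range[0]) & (ct_hu <= hu_range[1]) & pericardial_mask
    return float(ct_hu[fat_mask].mean()) if fat_mask.any() else float("nan")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct = rng.integers(-200, 200, size=(40, 64, 64)).astype(float)  # toy HU volume
    mask = np.zeros_like(ct, dtype=bool)
    mask[10:30, 16:48, 16:48] = True                               # toy pericardial ROI
    print(eat_volume_ml(ct, mask, spacing_mm=(2.0, 0.7, 0.7)))
    print(mean_eat_attenuation(ct, mask))
```

In a real pipeline the mask would come from dedicated segmentation software, and the HU window would be switched to −190 to −3 HU for contrast-enhanced scans, as noted above.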
CMR is a noninvasive imaging modality without radiation, able to provide biventricular function assessment, tissue characterization and highly reproducible, three-dimensional EAT measurements [26,27]. Assessment of EAT volume does not require the use of gadolinium-based contrast agents and is usually quantified on cine bright-blood steady-state free-precession (SSFP) sequences. Currently, hydrogen proton (1H) magnetic resonance spectroscopy (MRS) is considered the clinical reference standard for quantifying myocardial triglyceride content, without the need for contrast agents or radionuclides [28]. Spectroscopy can distinguish between multiple myocardial triglycerides, water and creatine based on their different resonance frequencies during 1H-MRS [29]. The spectroscopic volume of interest is usually positioned within the interventricular septum and the spectroscopic signals are acquired with cardiac triggering at end systole. Myocardial steatosis is quantified as the myocardial triglyceride content relative to water or creatine. In addition, newer CMR techniques, such as multiecho Dixon-like methods that rapidly obtain fat- and water-separated images from the region of interest in a single breath-hold, avoiding contamination from EAT, are also useful tools for this purpose [28,30]. Using the in-phase/out-of-phase cycling of fat and water, water-only and fat-only images can be created (Figure 3) [30]. This method can also be combined with a variety of sequence types (spin echo, gradient echo, SSFP sequences) and weightings (T1, T2 and proton density). Myocardial fatty infiltration has been linked with diastolic dysfunction, dilated cardiomyopathy and arrhythmogenic right ventricular cardiomyopathy (ARVC) [26,31,32]. Concurrently, EAT/PAT, as assessed by CMR, has been associated with the extent and severity of coronary atherosclerosis, impaired left ventricular (LV) systolic function and myocardial fibrosis in CMR studies [28,33].
The advantages, limitations and clinical implications of the different screening modalities for imaging cardiac adiposity are summarized in Table 1.

Pathophysiological Mechanisms of AF

AF is the most common clinically relevant arrhythmia. The mechanisms of AF are complex and multifactorial, involving an interaction between initiating triggers, an abnormal atrial substrate and a modulator such as vagal or sympathetic stimulation [39–41]. The triggering of premature atrial contractions by beats that arise especially from one or more pulmonary veins, and less frequently from other parts of the atria, may initiate AF, while the repetitive firing of these focal triggers may contribute to the perpetuation of the arrhythmia [41,42]. The PVs play an important role in the arrhythmogenesis of AF through the mechanisms of automaticity, triggered activity and reentry. Once the arrhythmia has been triggered, different theories, including the multiple wavelet hypothesis and the rotors model, have been suggested to explain the maintenance of AF [43,44].
In the first theory, multiple wavelets randomly propagate through the atrial tissue in different directions, detected as complex fractionated electrograms by mapping catheters. In the second theory, AF is attributed to reentrant electrical rotors, which are identified as wavelets with rotational activity around a structural or functional center, detected by spectral analysis of high-frequency sites via intracardiac mapping catheters. Increasing evidence supports the role of the autonomic nervous system in the initiation and maintenance of AF through the ganglionic plexuses, commonly located on the left atrium in close proximity to epicardial fat pads [45]. Both parasympathetic and sympathetic stimulation enhance the propensity to AF, the former by shortening the effective refractory period, whereas the latter facilitates the induction of AF and automaticity in focal discharges. The role of ablation of ganglionated plexi as an adjunctive procedure in the treatment of AF remains to be determined [46]. The development of AF induces a slow but progressive process of atrial substrate abnormalities involving electrical and structural alterations [47]. These changes facilitate electrical reentrant circuits or triggers, which, in turn, increase the propensity for the development and maintenance of the arrhythmia. Electrical remodeling includes shortening of the atrial action potential duration and increased dispersion of refractoriness, largely due to downregulation of the L-type Ca2+ inward current and upregulation of inward rectifier K+ currents, while heterogeneity in the distribution of intercellular gap junction proteins such as connexin 40 or 43 has been linked with slower conduction velocity, which favors reentry [39,48–51]. Over time, the presence of AF also leads to structural changes including hypocontractility, fatty infiltration, inflammation, atrial dilatation and stretch-induced atrial fibrosis, which is the hallmark of structural remodeling in AF and is considered an especially important substrate for AF perpetuation [52–54]. Experimental and clinical data indicate that inflammation is particularly involved in the initiation and maintenance of AF and, conversely, that AF can further promote inflammation [55,56]. Although the precise mechanistic links remain unclear, several effects of inflammation seem to be mediated by oxidative stress [57]. Various inflammatory biomarkers, including C-reactive protein (CRP), IL-6, TNF-α and MCP-1, are associated with AF risk [58,59]. It has been suggested that TNF-α, IL-2 and platelet-derived growth factor can provoke abnormal triggering in PVs and shortening of the atrial action potential duration through regulation of calcium homeostasis, as well as induce atrial fibrosis, connexin dysregulation and apoptosis, leading to increased conduction heterogeneity [55]. However, their clinical utility in guiding AF management is not well established [56,58].
Cardiac Adiposity and AF

Even though cardiac fat depots encompass a small minority of total body fat, their proximity to cardiac structures has raised great interest in whether they can play an additional role in the modulation of biochemical and metabolic triggers leading to AF. Increasing evidence supports a potential role of EAT/PAT as a determinant of the substrate of AF as well as a modulator and/or trigger (Table 2) [60]. Furthermore, fatty infiltrates provide a substrate (class IVf) for arrhythmia genesis according to the European Heart Rhythm Association consensus [61]. The underlying mechanism linking EAT/PAT and AF appears to be multifactorial, with local inflammation, fibrosis, adipocyte infiltration, electrical remodeling, autonomic nervous system modulation, oxidative stress and gene expression playing interrelating roles [62]. An association between EAT/PAT and AF has been reported in over 3000 patients, after adjustment for AF risk factors, including BMI [63]. EAT/PAT has also been associated with AF severity and left atrial volume, and was an adverse prognostic marker for AF recurrence after catheter ablation, as determined by various imaging modalities including CMR and echocardiography [64–68,80,81,83,87]. Specifically, in studies using CT, EAT/PAT volume was larger in AF patients and was independently associated with paroxysmal and persistent AF, while EAT volume and the thickness of periatrial EAT were related to the chronicity of AF [64,69]. Consistently, periatrial EAT volume was a predictor of new-onset AF in patients with CAD and of postoperative AF in patients undergoing coronary artery bypass grafting [70,71]. EAT volume has been associated with negative ablation outcomes, although this was not confirmed in a very recent hybrid AF ablation study, signifying that further research is required to clarify the effect of EAT on these procedures [88]. Additionally, EAT thickness, as assessed by echocardiography, was useful in predicting adverse CV events, and could provide incremental value for CV outcome prediction over traditional clinical and echocardiographic parameters in AF [84]. There is increasing evidence supporting a close association between EAT/PAT and inflammation in CT-derived studies. Thus, the inflammatory activity of EAT, reflected by glucose metabolism on PET/CT, was significantly and strongly linked with AF [72]. In line with this, inflammation of local periatrial EAT, as expressed by higher CT density, was related to the presence of paroxysmal AF compared with controls [73]. Moreover, increased EAT volumes and elevated levels of inflammatory markers, such as CRP and interleukins, were noted in persistent rather than paroxysmal AF patients [74]. Additionally, another study showed that samples of pericoronary, periventricular and periatrial EAT, obtained from patients paired for CV risk factors, CAD and AF, appeared to have varying pro-inflammatory properties depending on their anatomical location, underscoring that imaging assessment of each EAT compartment might add value in assessing the risk of AF and CAD [89]. Finally, the fact that obesity is a well-established risk factor for AF and is associated with EAT/PAT, together with a growing body of evidence linking inflammation with the pathogenesis of AF, indicates a potential interaction between local and systemic inflammation in the increasing prevalence of AF [90–93]. Moreover, cardiac adiposity can play a role in atrial electrophysiology, promoting functional heterogeneity, which contributes to conduction abnormalities.
Complex fractionated atrial electrograms and high dominant frequency sites, both of which play an important role in the maintenance of AF, were closely related to CT-derived EAT/PAT volume and to locations that are frequent targets for AF catheter ablation [75,76]. This correlates with the fact that the presence of EAT/PAT on CT or CMR was linked with alterations in atrial conduction, such as slower conduction velocity, prolonged cardiomyocyte field potential duration, greater complexity of activation patterns, lower bipolar voltage and electrogram fractionation [77–79,82]. Additionally, EAT/PAT may affect arrhythmogenesis by triggering sympathetic tone through the adrenergic and cholinergic nerves it contains, and by promoting fibrosis, which plays a central role in AF pathophysiology, via cytokine and growth factor secretion [94]. Of note, in patients with and without heart failure, echocardiographic EAT thickness was related to sympathetic nervous system imbalance, as detected by myocardial scintigraphy, impaired heart rate variability and heart rate turbulence parameters [85,86]. Recently, additional insights into the impact of EAT on the atrial substrate for AF have emerged from the correlation of local CT-EAT volume with histological atrial fibrosis, an effect that can be attributed to an EAT-cardiomyocyte paracrine axis [77]. Finally, intramyocardial fat has also been associated with supraventricular arrhythmias. Fatty infiltrates, which are common atrial histological findings, may become fibrotic under specific disease conditions, affecting the myocardial remodeling processes involved [95]. Fibro-fatty infiltration of the subepicardium has been recognized as an important determinant of the substrate of AF [96,97].

Cardiac Adiposity and Ventricular Arrhythmias

Although the link of EAT/PAT with AF is strong, its relation with ventricular arrhythmias currently remains insufficiently validated. In contrast, there is an association between fatty infiltration of the myocardium and cardiomyopathies. Usually, this subset of patients has significant local and diffuse fibrosis, pro-inflammatory states, and comorbidities that predispose them to arrhythmias. Intramyocardial fat has been connected with ventricular arrhythmogenesis in obese adults, in genetic disorders such as arrhythmogenic right ventricular cardiomyopathy, myotonic dystrophy and Fabry's disease, as well as in healed myocardial infarction and systolic heart failure (Table 3) [98,99].
Healed myocardial infarction
• Myocardial fat infiltration is associated with: scar age and size; lower bipolar and unipolar amplitudes; fragmented electrograms; colocalization with critical VT isthmuses; adverse outcomes, including postablation VT recurrence and all-cause mortality [108,109]
• Myocardial fat infiltration is associated with: larger infarcts; adverse LV remodeling; sustained VT, HF hospitalization and all-cause mortality [110]
• PAT is associated with: postablation VT recurrence [111]
HF
• PAT is associated with: development of VT/VF and mortality in patients with systolic HF [112]
• Myocardial fat infiltration is related to: LV global function and fibrosis volume in patients with DCM [113]
• EAT thickness is a predictor of: clinical events and arrhythmic events (VT/VF and AF) [114]
Other conditions
• RV-PAT is associated with the frequency of VPBs [115]
• EAT thickness is associated with: prolonged QTc interval in hypertensive patients and in the general population; the frequency of VPBs in patients without structural heart disease; impaired post-exercise HRR in obese patients with obstructive sleep apnea; VPB ablation failure [116–120]

Reentry is the mechanism responsible for most ventricular arrhythmias, while a focal mechanism, probably through triggered activity arising from either early or delayed afterdepolarizations without evidence of reentry, may also contribute to ventricular arrhythmias [121,122]. Multiple factors, including underlying structural myocardial disease, mechanical factors such as increased wall stress and LV dilation, neurohormonal factors via sympathetic nervous and renin-angiotensin system activation, as well as myocardial ischaemia, lead to alteration of the electrophysiological milieu, including changes in conduction and refractoriness and enhanced automaticity.

Arrhythmogenic Right Ventricular Cardiomyopathy (ARVC)

ARVC is a hereditary cardiomyopathy characterized by fibrofatty replacement of the ventricular myocardium, with the right ventricle (RV) being predominantly affected, although left or biventricular forms have also been described [123]. The altered histopathological substrate predisposes these patients to ventricular arrhythmias and sudden cardiac death. CMR is considered the preferred imaging modality, being able not only to quantify biventricular function but, more importantly, to assess myocardial tissue abnormalities, such as intramyocardial fat infiltration, oedema and fibrosis. Although fibrosis and/or fibrofatty replacement of myocytes on LGE is the pathologic hallmark of ARVC, these findings are not included in the 2010 revised Task Force Criteria (TFC) for the diagnosis of ARVC, because of concerns about their subjectivity, specificity and reproducibility [124]. Even though the direct assessment of RV tissue composition by CMR is challenging, technical advances in imaging, such as cine-SSFP techniques, may provide better characterization of fatty content and contribute to better stratification of arrhythmic risk in ARVC patients [123]. Thus, fatty infiltration was associated with advanced RV structural disease in patients who fulfilled major TFC-CMR imaging criteria and who were at the highest arrhythmic risk [103]. Of note, cardiac steatosis was also found in a minority of patients with partial TFC imaging criteria, suggesting a potential role in the diagnosis and reclassification of patients who would otherwise not meet current CMR imaging criteria.
These findings were further expanded when the involvement of the left ventricle (LV) was considered in this disease setting. Recently, LV intramyocardial fat was detected in more than half of ARVC patients, was mostly located in the same regions as fibrotic deposition, and was negatively related to the severity of LV systolic impairment [104,105]. Concomitantly, LV fat infiltration in combination with LV wall motion abnormalities and LGE could independently predict the major combined endpoint of sudden cardiac death, aborted cardiac arrest, and appropriate cardioverter-defibrillator implantation in ARVC patients [106,107]. LV involvement also allowed a reclassification of 5-year risk of events compared with the ARVC score. The above-mentioned studies highlight the need for further research to examine the potential additive utility of adiposity and/or fibrosis in ARVC patients who are at an early stage of the cardiomyopathy. In addition, CT has also been used for depiction of fatty infiltration within the thin RV wall, due to its high spatial resolution combined with the high native contrast of adipose tissue [123]. Intramyocardial fat burden was correlated with RV dysfunction and VT substrate, such as conduction and repolarization disturbances, in ARVC [100,101]. A vast majority of the local abnormal ventricular activities were located around the border of the RV fat segmentation, indicating that the integration of CT with 3-dimensional electroanatomic mapping could demonstrate ablation targets. Finally, EAT was an indicator of the degree of myocardial disease progression in ARVC, since it was related to the severity of structural disease in the RV [102]. Healed Myocardial Infarction Histological and imaging studies have revealed that intramyocardial fat deposition is frequently located in post-infarcted ventricular myocardium as part of a healing process called lipomatous metaplasia [125][126][127]. An association between lipomatous metaplasia and abnormal ventricular electrophysiology has been reported in both animal and clinical studies [110,128]. In this regard, electrophysiological studies demonstrated that lipomatous metaplasia, as depicted by CMR or CT, was strongly associated with scar age and size, lower bipolar and unipolar amplitudes and critical ventricular tachycardia circuit sites in patients with ischemic cardiomyopathy, suggesting its potential role in the generation of scar-related VT circuits in this setting [108,109]. Fragmented and isolated electrograms were also more frequently observed in areas with fat. Importantly, intramyocardial adipose tissue, predominantly detectable within the subendocardial layer of the scar area with variable transmural extent, was a significant predictor of sustained ventricular arrhythmia, heart failure hospitalizations and all-cause mortality in patients with a history of myocardial infarction [110]. These results expand the findings of histological studies in which intramyocardial adiposity was associated with significantly altered ventricular electrophysiology and an increased propensity for VT after MI, whereas there was an inverse link with myocardial viability [128,129]. Myocardial fat was thus associated with altered electrophysiological properties and VT circuit sites in patients with ICM. Recently, it has become evident that EAT, as documented using CMR or CT, was an independent predictor of VT recurrence and all-cause mortality following ablation, highlighting the role of this imaging biomarker for risk stratification post-ablation [111]. 
Consistently, the distinct electrophysiological properties of the VT substrate according to the presence of fat were also confirmed. Heart Failure (HF) and Other Conditions EAT/PAT is increased in patients with LV hypertrophy, diastolic dysfunction, and heart failure with mid-range and preserved ejection fraction, whereas regression of EAT has been reported in advanced heart failure [130][131][132][133][134]. However, the presence of EAT/PAT seems to be associated with ventricular arrhythmias in the setting of heart failure with reduced ejection fraction. Thus, CMR-derived PAT was related to the development of ventricular tachycardia/fibrillation and mortality in patients with systolic HF [112]. In line with this, echocardiographic assessment of EAT was recently shown to be a strong predictor of both clinical and arrhythmic events, including ventricular tachycardia/fibrillation and AF [114]. Furthermore, intramyocardial fat was significantly related to LV global function and fibrosis volume in patients with dilated cardiomyopathy, indicating that it may be a marker of disease prognosis [113]. Moreover, EAT/PAT was independently associated with a prolonged QTc interval and frequent ventricular premature beats in different subgroups of patients, indicating the arrhythmogenic potential of cardiac adiposity [115][116][117][118]. Additionally, EAT was an independent marker of impaired heart rate recovery, a noninvasive index of autonomic nerve dysfunction, in obese patients with obstructive sleep apnea, portending poor cardiovascular prognosis in obese patients [119]. Finally, echocardiography-derived EAT thickness was higher in patients with premature ventricular contraction ablation failure [120]. EAT/PAT as a Therapeutic Target Given its relation to metabolic dysregulation, inflammation, free fatty acid delivery and glucose resistance, EAT/PAT has become a therapeutic target for lifestyle modifications and pharmacological therapies modulating fat, as well as those improving glucose control. Emerging evidence shows that EAT may be reduced by diet, exercise, bariatric surgery, statins and antidiabetic therapies including glucagon-like peptide-1 (GLP-1) analogues and sodium-glucose co-transporter 2 inhibitors (SGLT2is). However, it is not yet known whether a reduction in EAT volume can be translated into a clinically relevant reduction in cardiovascular risk. In particular, recent studies have shown that exercise training may be a means to specifically target cardiac adipose tissue, as exercise led to a reduction in EAT/PAT volume ranging from 5% to 32%, even in the absence of weight loss [135][136][137][138]. Accordingly, significant reductions in both EAT/PAT volume and total cardiac adipose tissue volume have been reported following dietary restriction and bariatric surgery [139][140][141]. Nevertheless, given that the two latter modalities have larger effects on body weight loss than on VAT reduction in obese people compared with exercise, it is likely that they are not optimal for targeting EAT [142]. With regard to pharmaceutical interventions, significant reductions in EAT/PAT volume were found following administration of atorvastatin in patients with AF, while statin therapy significantly reduced both EAT thickness and its inflammatory status in fat samples obtained from patients undergoing cardiac surgery [143][144][145]. 
Furthermore, liraglutide, a GLP-1 analogue that has been shown to reduce CV mortality, caused an almost 40% reduction in EAT/PAT among type 2 diabetic patients, underscoring that the cardioprotective effects of this drug could potentially be mediated through reductions in EAT [146]. Accordingly, SGLT2is prevent CV deaths and HF events regardless of the presence or absence of diabetes [147]. It remains unknown how SGLT2is exert such beneficial effects on CV diseases, since SGLT2 is not expressed in cardiomyocytes [148]. One theory is that SGLT2is have a salutary effect through increased lipolysis in adipose tissue by reducing plasma glucose levels, leading to increased free fatty acid delivery to the heart while reducing the EAT depot [136,148]. The effect of SGLT2is on EAT/PAT has been investigated only recently. Thus, EAT thickness and/or volume was significantly decreased by dapagliflozin, canagliflozin, ipragliflozin and luseogliflozin, suggesting a drug class effect [149][150][151][152][153][154][155]. Of note, recent studies have reported that dapagliflozin (a) improved the differentiation of epicardial adipocytes, (b) benefited wound healing in endothelial cells, (c) reduced EAT volume, (d) decreased the secretion of proinflammatory chemokines, and (e) improved P-wave indices, such as P-wave dispersion [150,151]. The changes in P-wave indices were especially associated with changes in EAT volume [150]. Although EAT/PAT shows promise as a modifiable cardiac risk factor, there are still several aspects to be clarified and more tailored therapeutic strategies, related to inflammation and metabolic dysfunction, to be investigated before we understand whether EAT will guide future clinical decision-making. Future Perspectives Current imaging modalities have provided valuable insight into the relationships between cardiac adiposity and arrhythmogenesis, helping to better understand the pathophysiology and to improve risk prediction and stratification, over and above the presence of obesity and traditional risk factors, especially in patients who are considered to be at intermediate risk. However, at present, given the insufficient data on the additive value of imaging biomarkers over commonly used risk algorithms, the use of different screening modalities is currently indicated for personalized risk stratification and prognostication in this setting. Furthermore, a qualitative evaluation of adipose tissue, next to its quantification, may be more clinically relevant. Thus, the evaluation of cardiac metabolism and the detection of tissue inflammation by newer imaging methods, such as phosphorus-31 MRS, hyperpolarized 13C MRS and the CT-derived fat attenuation index, may give more information on the arrhythmogenic substrate at an early stage [156][157][158]. Moreover, the application of PET, using a variety of tracers that can quantify fatty acid, oxygen, glucose, and lactate uptake, may further stimulate research on the evaluation of cardiac metabolism in arrhythmia genesis [159]. Imaging biomarkers may also guide therapeutic strategies targeting cardiac fat depots and monitor responses to treatment [136]. Nevertheless, it is not yet known whether reducing cardiac fatty depots will also modify the arrhythmogenic substrate and reduce the risk of developing arrhythmia. Conclusions Although there is extensive experimental, imaging and clinical evidence that cardiac adiposity is an important modulator of arrhythmogenicity, mainly of AF, several aspects need clarification. 
Variable strengths of causal relationship have been suggested by many screening studies that include different populations, different disease stages, different fat locations (periatrial, periventricular, perivascular) and different indices (volume, thickness). In addition, EAT and PAT are often not discriminated on screening modalities. Moreover, a standardized imaging measurement protocol and threshold values for different subgroups with comorbidities (hypertension, diabetes, obstructive sleep apnoea) are still lacking. Future research will enhance our understanding of the diagnostic and prognostic significance of multimodality imaging of cardiac adiposity as a marker of arrhythmias and whether it may contribute to the management of at-risk or affected patients. Author Contributions: All authors contributed significantly to the conception of the work and the drafting and critical revision of the manuscript. All authors have read and agreed to the published version of the manuscript. Conflicts of Interest: The authors declare no conflict of interest.
6,783
2021-02-01T00:00:00.000
[ "Medicine", "Biology" ]
Socio-economic inequality of immunization coverage in India To our knowledge, the present study provides a first-time assessment of the contributions of socioeconomic determinants of immunization coverage in India using the recent National Family Health Survey data. Measurement of socioeconomic inequalities in health and health care, and understanding the determinants of such inequalities in terms of their contributions, are critical for health intervention strategies and for achieving equity in health care. A decomposition approach is applied to quantify the contributions from socio-demographic factors to inequality in immunization coverage. The results reveal that poor household economic status, mother's illiteracy, per capita state domestic product and the proportion of illiterates at the state level are systematically related to 97% of predictable socioeconomic inequalities in full immunization coverage at the national level. These patterns of evidence suggest the need for immunization strategies targeted at different states and towards certain socioeconomic determinants, as pointed out above, in order to reduce socioeconomic inequalities in immunization coverage. JEL Classification: I10, I12 Background The distributive dimension of health, or health inequality, has become prominent on the global health policy agenda, as researchers have come to regard average health status as an inadequate summary of a country's health performance [1]. Socioeconomic inequalities in child health are a major concern in developing countries seeking to achieve the Millennium Development Goals set forth by the United Nations [2]. Yet progress towards achieving goals in reducing socioeconomic inequalities in child health may have been stymied by a critical gap in documenting and understanding trends in socioeconomic inequality in child health indicators, particularly in less developed countries (endnote a). While many cross-sectional studies have been performed, relatively little evidence is available regarding how socioeconomic inequalities in health have changed over time as the development process unfolded and levels of urbanization rose, women's educational attainment improved, infrastructure spread, and income and wealth increased; however, a few studies have shown that socioeconomic disparities in health have in fact increased (endnote b). In developing countries, gaps in health-related outcomes between the rich and the poor are large [3][4][5][6][7]. These gaps limit poor people's potential to contribute to the economy by reducing their capacity to function and live life to the fullest, and even to survive. The study of poor-rich inequalities in health status should not, however, solely aim to quantify their magnitude. Research should also aim to identify which population subgroups are the most disadvantaged. For this purpose, it should be possible to identify the determinants of inequalities, including those associated with age, gender, education, occupation, etc. These variables have previously been identified as powerful sources of health inequalities in low and middle income countries [8,9]. A growing number of studies have examined inequalities in immunization coverage by household economic status in developing countries like India [10][11][12][13][14]. Many studies have assessed the level of socioeconomic inequalities in health using concentration indices and concentration curves. 
Though the values of concentration indices (CIs) show the degree of socio-economic inequality, they do not highlight the pathways through which inequality occurs. Decomposition of inequalities is critical to explore the pathways of socioeconomic inequalities in child health. Moreover, the full immunization coverage rate has only increased from 71% in 1992 to 80% in 2006 in India (Figure 1). There has been little progress from wave 2 to wave 3 of the National Family Health Survey, i.e. the period from 1998-99 to 2005-06. The proportion of children not fully immunized has declined by just two percentage points, i.e. from 58% to 56%. Hence, an intensive study is required to assess such disappointing progress in full immunization coverage. To our knowledge, there has been virtually no study that attempted a decomposition of health inequalities in the Indian context to understand such pathways. Moreover, this study also considered state level covariates along with household/individual level variables to examine the degree of contribution to the total socio-economic inequality in full immunization coverage. Given the methodological developments and the policy relevance, an attempt has been made in the present study, for the first time, to decompose health inequalities in terms of immunization coverage in India. The objective of this study is two-fold: first, to use a concentration index to quantify the socioeconomic distribution of children not fully immunized; and second, to decompose these inequalities by quantifying the contribution attributable to both household/individual covariates (i.e. economic status, education of mother, caste, residence, birth order and sex of the child) and state specific variables (i.e. poverty ratio, per-capita state domestic product, income inequality measured in terms of the Gini coefficient, % of public health spending of the total health spending, % of illiterate, and % of Scheduled Tribe/Scheduled Caste population). Methods Similar to previous studies initiated by Wagstaff et al. [15], we use the concentration index as our measure of relative socioeconomic inequality in immunization coverage. A concentration curve L(s) plots the cumulative proportion of the population (ranked by socioeconomic status (SES), beginning with the lowest SES) against the cumulative proportion of children not being fully immunized. If L(s) coincides with the diagonal, the absence of full immunization is equally distributed regardless of SES. However, if L(s) lies above the diagonal, then inequality in coverage exists and favors those with high SES. The further L(s) lies from the diagonal, the greater the degree of inequality. The concentration index, C, is defined as twice the area between L(s) and the diagonal and takes a value of 0 when the absence of full immunization is equally distributed regardless of SES. The minimum and maximum values of C are -1 and +1, respectively; these occur in the (hypothetical) situations where the absence of full immunization is concentrated entirely in the most disadvantaged and the least disadvantaged person, respectively. Thus, the larger the negative value of C, the more the absence of full immunization concentrates among low SES groups. A computational formula for C, which allows for the application of sample weights, was given by Kakwani et al. [16] as $C = \frac{2}{N\mu}\sum_{i=1}^{N} w_i y_i R_i - 1$, where $\mu$ is the weighted mean of the sample, i.e. the weighted proportion not fully immunized, $N$ the sample size, $y_i$ an indicator for not being fully immunized, $w_i$ the sample weight of the individual (the weights summing to $N$) and $R_i$ the fractional rank defined according to Kakwani et al. as $R_i = \frac{1}{N}\left(\sum_{j=1}^{i-1} w_j + \frac{w_i}{2}\right)$, i.e. 
the weighted cumulative proportion of the population up to the midpoint of each individual weight. Following the same authors, C can be conveniently computed as the weighted covariance of $y_i$ and $R_i$, i.e. $C = \frac{2}{\mu}\,\mathrm{cov}_w(y_i, R_i)$. A straightforward way of decomposing the predicted degree of inequality into the contributions of explanatory factors was proposed by Wagstaff et al. [17]. Adapting their approach to the present case, where the health indicator is a binary variable and a logit regression specification is thus applied, amounts to specifying $l(p_i) = \sum_k \beta_k x_{ki}$, where $l(p_i)$ is the logit of the predicted probability of not being fully immunized and $\beta_k$ the logit regression coefficient for the health determinant $x_k$. Given this linear relationship, the concentration index for $l(p_i)$ can be written as $\hat{C} = \sum_k \frac{\beta_k \bar{x}_k}{\mu} C_k$, where $\mu$ is the mean of $l(p_i)$, $\bar{x}_k$ the mean of $x_k$ and $C_k$ the concentration index of $x_k$ (defined analogously to C). While $\beta_k$ measures the relationship between the health determinant $x_k$ and the logit $l(p_i)$, a more intuitive expression of the relationship between the health determinant and the probability $p_i$ is the marginal effect $m_k = \lambda\left(\sum_k \beta_k \bar{x}_k\right)\beta_k$, where $\lambda(\cdot)$ is the logistic density function. Specifically, $m_k$ expresses the average change in the probability of not being fully immunized when the health determinant $x_k$ changes by one unit. In order to assess sampling variability and to obtain standard errors for the estimated quantities, where in particular the concentration indices and the contributions, i.e. the $\frac{\beta_k \bar{x}_k}{\mu} C_k$ parts, cause difficulties, we apply a "bootstrap" procedure [18,19] in a five-step manner similar to van Doorslaer and Koolman [20]: First, the sample size is inflated to allow for differences in sampling probability by dividing the sampling weights by the smallest weight and rounding to the nearest integer. Second, from this expanded sample a random sub-sample of the size of the original sample is drawn with replacement. Third, the entire set of calculations specified above is performed on this sample. Fourth, this whole process is repeated 1,000 times, each repetition leading to replicate estimates. Fifth, using the obtained 1,000 replicates, standard deviations and t statistics can be computed (an illustrative computational sketch of these quantities is provided below). Data If a card was available, the interviewer was required to carefully copy the dates on which the child received vaccinations against each disease. For vaccinations not recorded on the card, the mother's report that the vaccination was or was not given was recorded. If the mother could not show a vaccination card, she was asked whether the child had received any vaccinations. If any vaccinations had been received, the mother was asked whether the child had received a vaccination against tuberculosis (BCG); diphtheria, whooping cough (pertussis), and tetanus (DPT); poliomyelitis (polio); and measles. For DPT and polio, information was obtained on the number of doses of the vaccine given to the child. Mothers were not asked the dates of vaccinations. To distinguish Polio 0 (polio vaccine given at the time of birth) from Polio 1 (polio vaccine given about six weeks after birth), mothers were also asked whether the first polio vaccine was given just after birth or later. A binary outcome variable was calculated, namely whether or not each live-born child aged 12-23 months received all recommended doses of vaccination (child fully immunized = 0; child not fully immunized = 1) (endnote c). 
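As a concrete illustration of the quantities described above, the following minimal sketch (not the authors' code) computes the weighted concentration index and the decomposition contributions in Python, assuming a survey DataFrame with hypothetical column names ('not_immunized', 'weight', 'ses_score', and binary determinants). Survey weights are treated as frequency weights in the logit fit, and the bootstrap step for standard errors is omitted.

```python
# Illustrative sketch only: weighted concentration index and decomposition
# contributions, under the column-name assumptions stated in the text above.
import numpy as np
import statsmodels.api as sm

def concentration_index(y, weight, ses):
    """C = 2/(N*mu) * sum_i w_i*y_i*R_i - 1, with weighted fractional ranks R_i."""
    order = np.argsort(ses)                      # rank from lowest to highest SES
    y = np.asarray(y, dtype=float)[order]
    w = np.asarray(weight, dtype=float)[order]
    w = w / w.sum() * len(w)                     # normalise weights to sum to N
    rank = (np.cumsum(w) - 0.5 * w) / len(w)     # weighted fractional rank R_i
    mu = np.average(y, weights=w)
    return 2.0 / (len(w) * mu) * np.sum(w * y * rank) - 1.0

def contributions(df, determinants):
    """Per-determinant terms beta_k * mean(x_k) * C_k / mu of the decomposition."""
    X = sm.add_constant(df[determinants].astype(float))
    fit = sm.GLM(df['not_immunized'], X, family=sm.families.Binomial(),
                 freq_weights=df['weight']).fit()    # weighted logit fit
    eta = np.asarray(X) @ np.asarray(fit.params)     # linear predictor l(p_i)
    mu_eta = np.average(eta, weights=df['weight'])   # mean of l(p_i)
    return {k: fit.params[k]
               * np.average(df[k], weights=df['weight'])
               * concentration_index(df[k], df['weight'], df['ses_score'])
               / mu_eta
            for k in determinants}
```

By construction, the terms returned by this sketch sum to the concentration index of the linear predictor, i.e. the predicted degree of inequality being decomposed.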
For the core analysis we considered "child not fully immunized" as the dependent variable to standardize the interpretation. Two sets of independent variables (household/individual and state specific) are considered for the decomposition analysis. The household/individual covariates consist of economic status (poor/non-poor), education of mother (illiterate/literate), caste (scheduled caste/tribe (SC/ST)/non scheduled caste/tribe), residence (rural/urban), sex of the child (male/female), and birth order (birth order < 3, birth order 3 or more). The state specific variables for the decomposition analysis included: poverty ratio, per-capita state domestic product, income inequality measured in terms of the Gini coefficient, % of public health spending of the total health spending, % of illiterate, and % of scheduled tribe/scheduled caste population. In the National Family Health Survey-3, an index of economic status (wealth quintile) for each household was constructed using principal components analysis based on data from 109,041 households. The wealth quintile distribution was generated by applying principal components analysis to 33 household assets (endnote d). The wealth quintile distribution was used to classify households as poor or rich for subsequent modelling. For the decomposition analysis, quintiles 1 and 2, and quintiles 3, 4, and 5 were grouped together. This produced a binary variable labelled 'poor economic status', including households in the bottom 40% of economic status. Mother's education was a categorical variable with the following four levels: illiterate, primary school, guidance/high school, university. For the decomposition analysis, mother's illiteracy, a binary variable, was used. Finally, the decomposition analysis is confined to twelve possible socio-economic determinants, including both household/individual and state specific variables, that could explain the major dimensions of socioeconomic inequality, particularly in developing countries like India. The predictor variables of interest are i) poor economic status, ii) mother is illiterate, iii) residence in rural area, iv) sex of the child (male), v) birth order of the child (birth order 3 or more), vi) belonging to scheduled caste/scheduled tribe, vii) poverty ratio, viii) per-capita state domestic product, ix) income inequality measured in terms of the Gini coefficient, x) % of public health spending of the total health spending, xi) % of illiterate, and xii) % of scheduled tribe/scheduled caste population. To take care of the non-equal probabilities of selection in different domains, a design weight was applied. The national level weight for women is calculated as $W_i = W_{Di} \times \frac{1}{R_{Hi}} \times \frac{1}{R_{Wi}}$, where $W_{Di}$, the household design weight for the $i$th domain, is the inverse of the sampling fraction for the $i$th domain ($f_i = n_i/N_i$); $R_{Hi}$ is the response rate of the households interviewed; and $R_{Wi}$ is the response rate of the women interviewed. After adjustment for non-response, the weights are normalized so that the total number of weighted cases is equal to the total number of unweighted cases. Table 1 presents mean values and concentration indices of the variables selected for the study, together with regression coefficients and the percentage contributions of the covariates to inequality in immunization. From the column of means, it is seen that about 56 percent of children aged 12-23 months are not fully immunized in India. Furthermore, 47 percent of the children belong to households with poor economic status, and a similar proportion of children have mothers who are illiterate. 
A majority of the children come from rural areas (74 percent). The second column of Table 1 presents concentration indices for both the dependent and predictor variables, which provide insights into the poor-rich distributions of immunization and the socio-economic determinants. Thus, the CI value for a child not fully immunized is -0.15021 at the national level, which indicates that immunization practice favors children from relatively wealthier families. Furthermore, it is seen that illiteracy of mothers, living in rural areas, belonging to a scheduled caste or tribe and high birth order concentrate among the poor. Results Estimated marginal effects from the regression analysis are presented in the third column of Table 1. The marginal effects indicate the association between the determinants and the child health outcome indicator. The relationship between wealth and immunization coverage is evident, as children from families with poor economic status have a 59 percent higher risk of not being fully immunized. Likewise, being the child of an illiterate mother increases the risk of not being fully immunized by 85 percent (Table 1), while the risks are 8 percent higher for children in rural areas and 35 percent higher for children of birth order 3 or more. Furthermore, the percentage of public health spending in total health spending and the percentage of the illiterate population at the state level are positively related to the child health outcome indicator. Finally, the last column of Table 1 presents the decomposition analysis of socio-economic inequalities in full immunization coverage. It is seen that poor household economic status contributes about 38 percent of the total socioeconomic inequalities in child immunization. A major contributor is mother's illiteracy, which contributes almost 34 percent to the inequality of immunization. Other important contributors are per-capita state domestic product and the % of illiterates at the state level, which contribute 14 percent and close to 10 percent, respectively. The results furthermore indicate that public health spending, income inequality and the % of scheduled caste and scheduled tribe population at the state level play a less important role in determining the scale of health inequality in terms of child immunization. To summarize, most predictable socioeconomic inequalities seem to arise from four socio-economic predictors: poverty itself, illiteracy of mothers, per-capita state domestic product and the % of illiterate persons at the state level. Discussion and conclusions The study presents, to our knowledge, first-time evidence on the composition of socioeconomic inequality in child health care in India in terms of children not being fully immunized. Decomposition results reveal that poor household economic status, mother's illiteracy, state domestic product and the level of illiteracy at the state level contribute about 97 percent of the total socioeconomic inequalities in full immunization coverage at the national level. Of these determinants, mother's illiteracy stands out with a contribution of about 34 percent. Furthermore, decomposition analysis of the determinants of health inequalities based on state level data shows that neither income inequality nor the public share of health spending are significant determinants of health inequalities, but per-capita state domestic product and the % of the illiterate population explain about 24% of the total health inequalities in full immunization coverage. 
Policy implications of these results may be that health intervention strategies aiming at reducing socioeconomic inequality in immunization coverage could usefully be supplemented with strategies aimed at reducing poverty and illiteracy in particular. Finally, intensive community level analysis is required to understand the pathways of health inequalities in full immunization coverage at the state level. Endnotes a. Numerous studies have examined the effects of socioeconomic status on child health or mortality using cross-sectional data. However, few of them have extended their findings to characterize levels of inequality, using either rate ratios or, especially, more sophisticated measures of inequality. Additional complications in extracting information on trends in socioeconomic inequalities in health from cross-sectional studies are that the specific measures of socioeconomic status often differ across studies, as do the number and type of other variables that are held constant [5,10,23]. b. Cleland et al. [24] found that disparities in child survival by socioeconomic status and maternal education did not narrow from the 1970s to the 1980s in a dozen developing countries. Wagstaff's [6] reanalysis of results from a number of studies showed that inequality in under-five mortality increased in Bolivia from 1994 to 1998, in Vietnam from 1993 to 1998 [25], and in Uganda from 1988 to 1995 [26]. c. Full immunization involves receiving BCG, three doses each of DPT and polio vaccine, and the measles vaccine. d. The 33 household asset variables are household electrification; type of windows; drinking water source; type of toilet facility; type of flooring; material of exterior walls; type of roofing; cooking fuel; house ownership; number of household members per sleeping room; ownership of a bank or post-office account; and ownership of a mattress, a pressure cooker, a chair, a cot/bed, a table, an electric fan, a radio/transistor, a black and white television, a colour television, a sewing machine, a mobile telephone, any other telephone, a computer, a refrigerator, a watch or clock, a bicycle, a motorcycle or scooter, an animal-drawn cart, a car, a water pump, a thresher, and a tractor.
4,173.4
2011-08-05T00:00:00.000
[ "Economics", "Medicine", "Sociology" ]
Development of self-fertile deletion homozygous and ditelosomic lines for the long arm of chromosome 2A in common wheat Most deletions for the short arm of chromosome 2A (2AS), and the telocentric chromosome for the long arm of chromosome 2A (2AL), are available only in the heterozygous condition in ‘Chinese Spring’ hexaploid wheat. This is due to the female sterility, and therefore self-sterility, of their homozygotes, caused by the partial or entire loss of the 2AS chromosome arm on which genes for normal synapsis and female fertility are located. On the other hand, a D-genome disomic substitution line 2D(2A) of ‘Langdon’ tetraploid wheat, in which chromosome 2D is disomically substituted for chromosome 2A, is available (i.e., self-fertile) despite chromosome 2A being missing in this line. This fact indicates that another gene for female fertility must be present in Langdon 2D(2A). We attempted to develop self-fertile 2AS homozygous deletion and ditelosomic 2AL lines by transferring this female fertility gene, through a series of crosses and cytological screening, from Langdon 2D(2A) to the two aneuploid lines. We finally obtained self-fertile 2AS homozygous deletion and ditelosomic 2AL lines. These lines displayed normal meiotic chromosome pairing and lacked all 12 of the 2AS markers used in this study. A series of aneuploid lines were produced in the common wheat cultivar Chinese Spring (CS) (Triticum aestivum L., 2n = 6x = 42, genome constitution AABBDD) (Sears, 1954, 1966; Sears and Sears, 1978). Ditelosomic lines, in which one of the two arms of each chromosome is disomically missing, have been used to allocate genes and DNA sequences to specific chromosome arms. Common wheat has 21 different chromosomes, which are grouped into three genomes (A, B and D) and seven homoeologous groups (1 to 7). Therefore, 42 ditelosomic lines are possible in common wheat. However, six ditelosomic lines are not available (Devos et al., 1999). Sears (1954) reported that the right (= short) arm of chromosome II (= 2A) (2AS) carries genes for normal synapsis and female fertility, and therefore a ditelosomic line for the long arm of chromosome 2A (2AL) is not available due to female sterility (Sears and Sears, 1978). Endo and Gill (1996) also reported that most deletion homozygotes for 2AS have irregular meiosis and are almost sterile. A complete set of disomic substitution lines was developed in the tetraploid wheat cultivar Langdon (LDN) (T. turgidum L. var. durum, 2n = 4x = 28, genome constitution AABB). In each of these aneuploid lines, a pair of LDN homologous chromosomes is replaced by a pair of D-genome homoeologous chromosomes that were transferred from CS (Joppa and Williams, 1988). Most of these lines are self-fertile, including LDN 2D(2A), in which LDN chromosome 2A is replaced with CS chromosome 2D. This suggested that LDN 2D(2A) has another gene responsible for female fertility on a chromosome other than chromosome 2A, and led us to the idea of developing a self-fertile line of ditelosomic 2AL in common wheat. Here we report the breeding process of self-fertile 2AS homozygous deletion and ditelosomic 2AL lines by transferring the female fertility gene from LDN 2D(2A) to these CS aneuploids. Production of self-fertile ditelosomic 2AL We started the production of a self-fertile line of ditelosomic 2AL from a cross between monotelodisomic 2AL (t'2AL + 1'2A) and the self-fertile 2AS-2 homozygous line to obtain a double monotelosomic 2A plant (t'2AL + t'2AS) (see Fig. 1). 
This plant was self-pollinated, and nine out of 27 F2 plants were ditelosomic for 2AL (t"2AL) (Fig. 3). Four of the ditelosomic 2AL plants were self-fertile, while the remaining five were completely self-sterile. The F3 progeny of one of the four self-fertile ditelosomic 2AL plants were all self-fertile (10 plants examined), suggesting that the parental F2 plant was homozygous for the female fertility gene. This F3 progeny was selected to establish a self-fertile line of ditelosomic 2AL. The established self-fertile ditelosomic 2AL line produced a reasonable number of seeds by self-pollination (31 seeds on four spikes). Meiotic chromosome configurations of the self-fertile deletion 2AS-2 homozygotes and self-fertile ditelosomic 2AL Sears (1954) reported that the 2AS chromosome arm carries genes for normal synapsis and for female fertility. Endo and Gill (1996) reported that some deletion homozygotes for 2AS had irregular meiosis with many univalents. As expected from these previous studies, the pollen mother cells (PMCs) of the sterile 2AS-2 homozygotes showed irregular meiosis with univalents ranging from 0 to 12 (3.76 on average from 25 PMCs) (Supplementary Fig. S1B). On the other hand, the self-fertile 2AS-2 homozygotes and self-fertile ditelosomic 2AL had normal meiotic pairing with no univalents (Supplementary Fig. S1C and S1D). PCR analysis of the self-fertile deletion 2AS-2 homozygotes and self-fertile ditelosomic 2AL We selected PCR markers, 12 for 2AS and eight for 2AL, from the microsatellites that had been used to construct a genetic map of chromosome 2A (Somers et al., 2004). The results of the PCR analysis with these markers are shown in Table 1 and Supplementary Fig. S2. All the markers were amplified in CS, but three of them were not amplified in LDN, suggesting the presence of sequence differences in chromosome 2A between CS and LDN. None of the 2A markers except two of the 2AL markers were amplified in LDN 2D(2A). Two markers (gwm558 and gwm473) cannot be located on chromosome 2A, because they were amplified even though chromosome 2A is missing in LDN 2D(2A). They must have been misassigned to 2A. None of the 2AS markers were amplified in the sterile and self-fertile deletion 2AS-2 homozygous lines, or in self-fertile ditelosomic 2AL, which suggested that all the 2AS markers were located distal to the breakpoint in 2AS-2. Although one of the 2AL markers (gwm372) was missing in the sterile and self-fertile 2AS-2 homozygotes and in self-fertile ditelosomic 2AL, it was present in ditelosomic 2AS. This contradiction can be resolved by assuming that gwm372 is on 2AS, somewhere distal to the breakpoint of deletion 2AS-2. Thus, the centromere should be positioned between gwm372 and gwm445. Chromosomal location of the gene for female fertility The female fertility gene was transferred from LDN 2D(2A) to the self-fertile 2AS-2 homozygous lines and self-fertile ditelosomic 2AL. This gene is most likely located on a chromosome other than chromosome 2A in LDN, and it lost its function in CS during evolution. Chromosome 2B, a homoeologous group-2 chromosome, is a likely candidate to carry the female fertility gene in LDN 2D(2A), although there is no direct evidence for this. The loss of multiple genes is likely to happen in hexaploid wheat, as demonstrated for the waxy genes (Yamamori et al., 1994) and for the male-fertility genes (Joshi et al., 2013). Another possibility is that the female fertility gene located on LDN chromosome 2A was transferred onto CS chromosome 2D during the production of LDN 2D(2A). 
This explanation sounds reasonable because Joppa and Williams (1988) reported that pairing between chromosomes 2A and 2D occurred frequently when both chromosomes were monosomic and resulted in plants carrying translocations, and that considerable effort was required to produce LDN 2D(2A) free of translocations. However, the 2D chromosome in self-fertile ditelosomic 2AL did not appear to have any such translocations, i.e., there was no change in the C-banding pattern of chromosome 2D (Fig. 2B), and the PCR analysis also showed no evidence of a structural change in chromosome 2D. Use of the self-fertile ditelosomic 2AL line It is unique to common wheat that ditelosomics are available for most of the chromosome arms. Ditelosomics are useful in localizing genes and DNA markers cytologically to specific chromosome arms. Cytological mapping has often corrected the order of markers in genetic maps. For example, Joshi et al. (2013) developed a self-fertile ditelosomic 4BL line and, using that line, corrected the marker order on chromosome 4B in previous genetic maps. So far, cytological mapping has been impossible for the 2AS arm because only ditelosomic 2AS is available and because no nullisomic 2A-tetrasomic 2B or 2D line exists. The self-fertile ditelosomic 2AL line developed in this study enables us to allocate DNA markers to the 2AS arm unambiguously. Indeed, the PCR analysis in this study indicated that one of the 2AL markers (gwm372) was located on the 2AS arm (Table 1). Also, deletion mapping for the 2AS arm has been hampered by the unavailability of homozygous deletion lines for most of the 2AS deletions. Nevertheless, we can conduct deletion mapping with 2AS hemizygous deletion plants that can be obtained by crossing the self-fertile ditelosomic 2AL line with 2AS heterozygous deletion lines.
1,964.2
2020-03-16T00:00:00.000
[ "Agricultural And Food Sciences", "Biology" ]
Performance Analysis of IoT-Based Sensor, Big Data Processing, and Machine Learning Model for Real-Time Monitoring System in Automotive Manufacturing With the increase in the amount of data captured during the manufacturing process, monitoring systems are becoming important factors in decision making for management. Current technologies such as Internet of Things (IoT)-based sensors can be considered a solution to provide efficient monitoring of the manufacturing process. In this study, a real-time monitoring system that utilizes IoT-based sensors, big data processing, and a hybrid prediction model is proposed. Firstly, an IoT-based sensor that collects temperature, humidity, accelerometer, and gyroscope data was developed. The characteristics of IoT-generated sensor data from the manufacturing process are: real-time, large amounts, and unstructured type. The proposed big data processing platform utilizes Apache Kafka as a message queue, Apache Storm as a real-time processing engine and MongoDB to store the sensor data from the manufacturing process. Secondly, for the proposed hybrid prediction model, Density-Based Spatial Clustering of Applications with Noise (DBSCAN)-based outlier detection and Random Forest classification were used to remove outlier sensor data and provide fault detection during the manufacturing process, respectively. The proposed model was evaluated and tested at an automotive manufacturing assembly line in Korea. The results showed that IoT-based sensors and the proposed big data processing system are sufficiently efficient to monitor the manufacturing process. Furthermore, the proposed hybrid prediction model has better fault prediction accuracy than other models given the sensor data as input. The proposed system is expected to support management by improving decision-making and will help prevent unexpected losses caused by faults during the manufacturing process. Introduction Manufacturing plays an important role in economic development and is still considered crucial to economic growth in the globalization era [1,2]. It has a positive impact on the growth of both developed and developing countries [3,4]. Emerging technologies are utilized by the manufacturing industry to enhance the economic competitiveness of individual manufacturers and the sustainability of the entire industrial sector. The adoption of information and communication technology (ICT) in manufacturing enables a transition from traditional to advanced manufacturing processes [5]. Monitoring systems, as part of ICT application, play an important part in manufacturing process control and management. Recent developments in information technology enable the integration of various as well as to send a warning alert to the nearest governments and healthcare clinics to prevent further outbreaks. Finally, Bayo-Monton et al. developed an IoT-based sensor utilizing Arduino and Raspberry Pi to enhance eHealth care [59]. The performance of the proposed sensor was compared with that of a personal computer. The results confirmed that the proposed IoT-based sensor was suitable for scalable eHealth systems. Several studies have been conducted in the manufacturing industry and showed significant advantages from IoT based sensors in improving working conditions, preventing erroneous designs, providing fault diagnosis and quality prediction, and helping managers with better decision making. Moon et al. developed an IoT-based sensor to measure the air quality inside a factory [11]. 
Temperature, humidity, CO 2 level, dust, and odor sensor data were collected and transmitted via wireless communication. Based on the experimental results, the proposed system is robust enough, able to accurately measure the environmental condition in the factory in real-time, and is expected to help managers maintain an optimum working environment for the workers inside the factory. Salamone et al. proposed an environmental monitoring system based on low-cost IoT sensors for preventing errors during the design phase in additive manufacturing [12]. The sensors were used to gather temperature and humidity data. The study revealed that knowledge of environmental conditions could help prevent errors during the design phase in additive manufacturing. Li et al. utilized IoT sensors to collect data for the fault diagnosis of mine hoisting equipment [13]. The study revealed that IoT sensors can help provide complete diagnosis data as well as improve diagnosis results. Lee et al. proposed a framework by utilizing IoT and machine learning to predict the quality of a product and optimize operation control [14]. Metal casting was used as a real-case implementation of the proposed system. The proposed system was able to effectively predict the quality of the metal casting and efficiently improve the operation control. Finally, Calderón Godoy et al. proposed the integration of sensors and the SCADA system for implementation of the fourth industrial revolution framework [15]. Experimental results confirmed the feasibility of the proposed system, which is expected to help managers during the migration of legacy systems to the Industry 4.0 framework. The number of IoT-based sensors and other related components is increasing significantly. The adoption of IoT in manufacturing enables the transition from traditional to modern digitalized manufacturing. As the number of devices collecting sensor data in manufacturing increases, the potential for new types of applications that can handle the input of large amounts of sensor data such as big data technology also increases. Ge et al. developed a conceptual framework by integrating big data technology in IoT, which is expected to support critical decision making [60]. By utilizing big data processing, the enormous amount of data collected by many heterogeneous sources (sensor devices) can be handled and presented in an efficient manner, thus they can assist managers with better decision making. Big Data Processing With the increasing number of IoT and sensing devices, data generated from manufacturing systems are expected to grow exponentially, producing so called "big data" [16]. Big data is often described in terms of 4 V's. The first V is volume in reference to the size of the data, the second V is variety in reference to the different types/formats of the data, the third V is velocity in reference to the speed of data generation, and the last V is veracity in reference to the reliability of the data [61]. The data generated during manufacturing is increasing daily with different types and formats (i.e., process logs, events, images, and sensor data), hence, the processing and storage of these data is becoming a challenging issue that needs to be addressed. There are several applications of big data analytics in the manufacturing industry. Zhang et al. proposed a big data framework for reducing energy consumption and emission in an energy-intensive manufacturing industry [17]. 
The proposed system consists of two components, data acquisition for gathering the energy data and data analytics for analyzing the energy usage. Based on a real-case implementation, the results showed that the proposed system was capable of eliminating three percent of the energy consumption and four percent of energy costs. Zhong et al. proposed a big data system for logistics discovery from RFID-enabled production data for mining knowledge [18]. An experiment was used to demonstrate the feasibility of the proposed system and the results showed that the knowledge gained from big data could be used for production scheduling and logistics planning. Mani et al. studied the application of big data analytics for mitigating supply chain social risk [19]. A case study was used to elaborate the application of big data analytics in the supply chain. The results of the study revealed that big data analytics can help management predict various social problems and mitigate social risks. Finally, Li et al. proposed a big data framework for active sensing and processing of complex events in manufacturing processes [20]. To effectively process complex event big data, a relation model and unified XML-based manufacturing processes were developed. The Apriori frequent item mining algorithm was used to find a frequent pattern from the complex events data. The feasibility and effectiveness of the proposed system was confirmed with implementation in a local chili sauce manufacturing company. The proposed model is expected to provide practical guidance for management decision-making. Several big data technologies can be utilized in the manufacturing industry to process and store large volumes of data quickly, such as Apache Kafka, Apache Storm, and NoSQL MongoDB. Apache Kafka is a scalable messaging queue system used for building real-time applications [62]. It is fault-tolerant, high-throughput, and scalable. Several studies have shown significant benefits from using Kafka for healthcare, transportation, manufacturing, and IoT-generated sensor data. Alfian et al. proposed real-time data processing for monitoring diabetic patients [21]. Apache Kafka and MongoDB were utilized to handle and store sensor data from the patients. The proposed system was sufficiently efficient at monitoring diabetic patients. Ji et al. proposed a cloud-based car parking system consisting of several technologies, including Apache Kafka [63]. The proposed system was capable of efficiently handling massive amounts of sensor data when the amount of data and the number of clients increased. D'silva et al. proposed a framework for handling real-time IoT event data [22]. The proposed framework utilized Apache Kafka as a message queue system and was efficient enough to process real-time IoT events data. Canizo et al. proposed a framework based on big data technologies and machine learning for online fault prediction for wind turbines [23]. Apache Kafka was used to handle incoming data in real-time and send the data to a streaming system for further analysis. The proposed system could be used to monitor the status of wind turbines and is expected to help reduce operation and management costs. Du et al. proposed a framework for handling huge amounts of incoming unstructured connected vehicle (CV) data [24]. The proposed framework utilized Apache Kafka as a distributed message broker. 
Experimental results showed the proposed system is efficient enough in handling huge amounts of incoming CV data and achieved the minimal recommended latency value defined by the U.S. Department of Transportation for CV applications. Park and Chi proposed an architecture for an ingestion system based on Apache Kafka for machine logs in the manufacturing industry [25]. The proposed system collects machine logs from a set of milling machines, handles them in a Kafka messaging queue, and delivers them to an external systems for further analysis. Finally, Ferry et al. proposed a data management system based on big data technologies for machine generated data in a manufacturing shop-floor [26]. The proposed system utilizes Apache Kafka as a message queue and Apache Storm as a real-time processing system. Implementation of the proposed system is expected to reduce infrastructure and deployment costs. Apache Storm is a real-time distributed parallel system for processing high-velocity stream data [64]. It is fault-tolerant and scalable, with guaranteed data processing. Previous studies have utilized Apache Storm for real-time data processing. Ma et al. proposed a stream-based framework for providing real-time information services on public transit [27]. The proposed framework utilized Apache Storm as a real-time distributed processing engine. The results showed that the proposed framework was capable of handling large amounts of real-time data with lower latency. Furthermore, the performance of the proposed framework increased when the number of nodes/servers utilized increased. Manzoor and Morgan proposed a real-time intrusion detection system based on Apache Storm [28]. The proposed system was evaluated using the KDD 99 network intrusion dataset and the results showed that the proposed system was feasible for processing network traffic data and detecting network intrusion with high accuracy. Chen et al. proposed a real-time geographic information system for managing environmental big data using Apache Storm [29]. The proposed system was tested with two use-cases (i.e., real-time air quality monitoring and soil moisture monitoring). The results showed that the proposed system was effective enough for managing real-time environmental big data. In addition, several studies have been conducted regarding the performance of Apache Storm as a real-time data processing system. Qian et al. performed a performance comparison between Apache Storm and Spark [30]. The latency and throughput of the system was considered and the results showed that Apache Storm has shorter latency while Spark has higher throughput. Finally, Chatterjee and Morin performed comparative performance analysis between several data streaming platforms (i.e., Flink, Storm, and Heron) [31]. Various performance metrics were considered such as fault tolerance and resource usage. The results showed that Storm has better fault tolerance and less memory usage than the other systems. The increasing amount of IoT-generated sensor data has led to increased demand for sensor-friendly data storage platforms. NoSQL databases have become popular in the last couple of years because of their growing flexibility, scalability and availability. The term 'NoSQL' collectively refers to data storage platforms that do not follow a strict data model for relational databases. MongoDB is a document-oriented NoSQL database that offers flexible data-schema, high performance, scalability, and availability [65]. 
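To make the sensor-data ingestion path described above (Kafka as the message queue, MongoDB as the store) concrete, the sketch below is an illustration only, not the system evaluated in this study: it publishes a single sensor reading to a Kafka topic using the kafka-python client and persists consumed readings in MongoDB using pymongo. The broker address, topic, database and collection names are hypothetical placeholders, and the Apache Storm processing stage (typically implemented in Java) is omitted.

```python
# Illustrative sketch with assumed names: Kafka producer for IoT sensor
# readings and a consumer loop that persists them in MongoDB.
import json
import time
from kafka import KafkaProducer, KafkaConsumer
from pymongo import MongoClient

producer = KafkaProducer(
    bootstrap_servers='localhost:9092',
    value_serializer=lambda v: json.dumps(v).encode('utf-8'))

reading = {                                    # one reading from the assembly line
    'machine_id': 'line-1', 'timestamp': time.time(),
    'temperature': 24.7, 'humidity': 41.2,
    'accel': [0.01, -0.02, 0.98], 'gyro': [0.1, 0.0, -0.1]}
producer.send('sensor-data', reading)          # 'sensor-data' is a placeholder topic
producer.flush()

# A separate consumer service would drain the topic into MongoDB:
consumer = KafkaConsumer(
    'sensor-data', bootstrap_servers='localhost:9092',
    auto_offset_reset='earliest',
    value_deserializer=lambda b: json.loads(b.decode('utf-8')))
readings = MongoClient('mongodb://localhost:27017')['manufacturing']['readings']
for message in consumer:                       # blocks; run as its own process
    readings.insert_one(message.value)
```

In a deployment of this kind, the consumer (or a stream-processing stage in its place) would normally run as a long-lived service separate from the producer embedded in each sensor gateway.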
A previous study compared the performance of MongoDB and Oracle with insert, update, and delete tests [32]. MongoDB outperforms oracle in all tests. In addition, MongoDB has been proven to be effective for storing data from the supply chain, geographic information systems and manufacturing. Alfian et al. utilized MongoDB to store IoT-generated sensor data for monitoring a perishable food supply chain [33]. In the study, MongoDB was capable of processing a huge amount of input/output sensor data efficiently when the number of sensors and clients increased. In addition, MongoDB outperformed MySQL in read and write tests. Hu et al. conducted a comparative study among six popular databases (i.e., Rasdaman, SciDB, Spark, ClimateSpark, Hive, and MongoDB) for handling a variety of geospatial data [34]. The results showed that MongoDB was adequate in terms of parallel query and resource consumption (i.e., CPU, memory, network). Chen et al. proposed MongoSOS, a sensor observation service based on MongoDB, for handling spatiotemporal data [35]. The proposed system was capable of handling read and write access for navigation and positioning data in a millisecond and the performance improved by around two percent compared with the traditional model. Putri et al. proposed a big data processing system based on Apache Spark and MongoDB to identify profitable areas from large amounts of taxi trip data [36]. The experimental results showed that the proposed system was scalable and efficient enough in processing profitable-area queries from huge amounts of big taxi trip data. Finally, Angrish et al. proposed a flexible data schema based on NoSQL MongoDB for the virtualization of manufacturing machines [37]. The proposed system was evaluated against several query statements. The results showed that MongoDB can accommodate any type of machine data and could easily be implemented across a variety of machines on the factory floor. Previous studies have shown a significant impact from the integration of several big data technologies. Lohokare et al. proposed a scalable framework for home automation in smart cities [38]. The proposed framework utilized Apache Kafka as a message broker to handle incoming IoT data and MongoDB to store the sensor data. The proposed system was able to reduce the processing time when the amount of data and nodes increased. Jung et al. proposed a smart city system using Apache Kafka and Apache Storm to handle and process IoT-generated data in real-time [39]. Experimental results showed that the proposed system was capable of effectively and efficiently processing the IoT-generated data in real-time. Villari et al. proposed a management system for smart environments using big data technologies [40]. The proposed system utilized Apache Storm to process the data in real-time and MongoDB to store huge amounts of sensor data. A case study on smart homes was performed, and the results showed that the proposed system was able to manage large amounts of smart environmental data in real-time. Zhou et al. proposed an integration of Apache Kafka, Apache Storm, and MongoDB for processing streaming spatiotemporal data [41]. The proposed system was tested using the Taiyuan BeiDou bus location data. The proposed system was capable of processing large amounts of sensor data per second and was around three times faster than the traditional model. Finally, Syafrudin et al. proposed an open source-based real-time data processing system consisting of Apache Kafka, Apache Storm, and MongoDB [42]. 
The proposed system was implemented to monitor the injection molding process in real-time. The proposed system was capable of processing a massive amount of sensor data efficiently as the amount of data and the number of devices increased. The integration of Apache Kafka, Apache Storm, and MongoDB can therefore be used for big data processing to handle manufacturing sensor data. Previous studies have shown that these three technologies can be used for big data processing so that large amounts of streaming sensor data can be promptly processed, stored, and presented in real-time [41,42]. Thus, in our study, Apache Kafka, Apache Storm, and MongoDB were utilized for big data processing to monitor the manufacturing process in real-time. In addition, the integration of big data processing with a machine-learning model is expected to help managers with decision-making and to prevent unexpected losses caused by faults during the manufacturing process. Machine Learning Methods in Manufacturing The manufacturing industry is experiencing an increase in data generation, e.g., sensor data from the production line, environmental data, etc. New developments in technology such as machine learning offer great potential for analyzing data repositories, and thus can provide support for management in decision-making or can be used to improve system performance. Machine learning techniques are utilized to detect certain patterns or regularities and have been successfully implemented in various areas such as fault detection, quality prediction, defect classification, and visual inspection. Several studies have utilized machine learning and showed significant results in the manufacturing industry. Kim et al. employed seven different machine learning-based novelty detection methods to detect faulty wafers [43]. The models were trained with Fault Detection and Classification (FDC) data. The experimental results showed that the machine learning-based models had a high probability of detecting faulty wafers. Lee et al. performed an evaluation analysis of four machine learning algorithms (i.e., decision tree, random forest, artificial neural network, and support vector machine) for predicting the quality of metal casting products [14]. The results showed that all four machine learning algorithms can be used effectively to predict product quality. Chen et al. utilized a support vector machine algorithm to predict the quality of welding with a high-power disk laser [44]. The results showed that the proposed quality prediction model can be used for a real-time monitoring system. An intelligent system was developed by Chen et al. to minimize incorrect warnings when detecting product quality in manufacturing [45]. They utilized three methods (i.e., visual inspection, support vector machine, and similarity matching). Through a real-case implementation in a manufacturing company in Taiwan, the proposed system was shown to effectively reduce misclassifications and improve the performance of quality prediction. Finally, two machine learning algorithms (i.e., decision tree and Naïve Bayes) were also used by Ravikumar et al. for automating the process of inspecting the quality of machine components [46]. Three types of machine component quality (i.e., good, minor scratch, and deep scratch) were measured. The results showed that the proposed method can effectively be used to automate the quality inspection of products in a real practical case.
Fault detection and diagnosis is an important problem in process engineering and is utilized to detect abnormal events in a process. Early detection of process faults can help avoid productivity loss. Machine learning algorithms such as Random Forest have shown significant efficacy in detecting process faults in manufacturing. Random Forest is an ensemble prediction method that aggregates the results of individual decision trees [66]. Generally, Random Forest works by utilizing the bagging method to generate subsets of the training data. For each training subset, a decision tree algorithm is utilized. In the end, the final prediction result is selected based on a majority vote (the most voted class) over all the trees in the forest. Recently, Random Forest was used by Quiroz et al. for detecting rotor bar failure. They performed a performance analysis comparing Random Forest with other models (i.e., decision tree, Naïve Bayes, logistic regression, linear ridge, and support vector machine). The experimental results showed that Random Forest outperformed the other models, with an accuracy of around 98.8%. The proposed model can be used for a real-time fault monitoring system as well as for a preventive maintenance system in a factory. Random Forest was also utilized by Patel and Giri for detecting bearing failure [48]. The results were compared with those obtained from an existing artificial intelligence technique, the neural network. The results showed that Random Forest had better performance and higher accuracy than the neural network algorithm. The results of this study are expected to be used for bearing fault detection and diagnosis. Finally, Cerrada et al. proposed fault diagnosis in spur gears based on a genetic algorithm and Random Forest [49]. The proposed system consisted of two parts, namely a genetic algorithm for attribute selection and Random Forest for classification. The proposed system was tested on real vibration signals, and Random Forest showed good performance for fault diagnosis. Machine learning algorithms encounter problems with outlier data, which can reduce the accuracy of the classification model. Outlier detection can be utilized in the preprocessing step to identify inconsistencies in the data (outliers); thus, a good classifier can be generated for better decision-making. Previous studies have shown that removing outliers can improve classification accuracy. Tallón-Ballesteros and Riquelme utilized outlier detection for a classification model [50]. The authors proposed a statistical outlier detection method based on the interquartile range (IQR) with classes. The results showed that by removing the outliers from the training set, the classification performance of C4.5 was improved. Podgorelec et al. utilized an outlier prediction method to improve classification model performance on medical datasets [51]. The results showed that by removing the identified outliers from the training set, the classification accuracy was improved, especially for the Naïve Bayes classifier. One of the techniques used for outlier detection is DBSCAN [52]. The algorithm works by identifying dense regions, which are determined based on the number of objects close to a given point. Finally, the algorithm identifies points that do not belong to any cluster, which are treated as outliers. DBSCAN has been implemented in different areas and has shown significant accuracy in detecting true outliers. Tian et al. proposed an outlier detection method involving soft sensor modeling of time series [53].
They utilized DBSCAN for outlier detection, and the proposed outlier detection method demonstrated good performance. Abid et al. proposed outlier detection based on DBSCAN for sensor data in wireless sensor networks [54]. The proposed model successfully separated outliers from normal sensor data. Based on experiments with synthetic datasets, the proposed model showed significant accuracy in detecting outliers, with an accuracy rate of 99%. Existing studies have shown that Random Forest can be utilized for fault prediction with high classification accuracy. Furthermore, several studies have shown significant results for DBSCAN-based outlier detection with regard to improving classification accuracy. We propose a hybrid prediction model that consists of DBSCAN-based outlier detection to remove the outlier data, and Random Forest to detect whether the manufacturing process is functioning normally or abnormally. The hybrid prediction model is integrated with a real-time big data processing system, enabling processing of the sensor data from the IoT-based sensor devices (e.g., temperature, humidity, accelerometer, and gyroscope) and fault prediction in real-time. System Design The real-time monitoring system proposed here was developed to help managers better monitor the assembly line process in an automotive manufacturing plant, as well as to provide an early warning when a fault is detected. The proposed system utilizes IoT-based sensors, big data processing, and a hybrid prediction model. The hybrid prediction model consists of clustering-based outlier detection and a machine learning-based classification model. As can be seen in Figure 1a, IoT-based sensors are attached to the desk of a workstation in the assembly line. The IoT-based sensors consist of temperature, humidity, accelerometer, and gyroscope sensors. The IoT-generated sensor data is transmitted wirelessly to a cloud server where the big data processing system is installed. This allows the system to process large amounts of sensor data quickly before they are stored in the MongoDB database. A clustering-based outlier detection method is utilized to filter out outliers from the sensor data. In addition, a machine learning-based classification model is applied to predict faults given the current sensor data during the assembly line process. Finally, the complete history of the sensor data, such as the temperature, humidity, accelerometer, and gyroscope data, is presented to the manager in real-time via a web-based monitoring system, in addition to the fault prediction results. Figure 1. Architecture of the real-time monitoring system in an assembly line process (a) and system design for big data processing (b). The proposed big data processing system utilizes Apache Kafka, Apache Storm, and MongoDB. Apache Kafka is a message queue system with low latency, high throughput, and fault tolerance, capable of publishing streams of data. Apache Storm is a real-time parallel data processing system with horizontal scalability, fault tolerance, and guaranteed data processing, and it can process large volumes of high-velocity streams of data.
Figure 1b shows the system design for the big data processing system proposed for real-time monitoring. The sensor data from the IoT-based sensor device is wirelessly transmitted using a python-based program developed to serve as the "producer" for the Kafka server. The "producer" client publishes streams of data to Kafka "topics" distributed across one or more cluster nodes/servers called "brokers". The published streams of data from Kafka are then processed by Storm in parallel and in real-time. Outlier detection and classification are implemented inside Storm. The sensor data and the classification results are stored in MongoDB and presented in a web-based monitoring system in real-time. The characteristics of IoT-generated sensor data are as follows: large volume, unstructured format, and continuous generation. Figure 2a shows an example of the data generated by the IoT-based sensors in JSON format before being sent to the Kafka server. The sensor data is delivered to Storm, where the hybrid prediction model (i.e., outlier detection and fault classification) is implemented. The sensor data and the prediction results are then stored in NoSQL MongoDB. An embedding scheme-based sensor data repository is commonly utilized in NoSQL MongoDB databases to improve performance [67]. We found that the embedding scheme is appropriate for a large sensor data repository, which requires fast read and write performance [33]. Thus, in our study, we utilized an embedding scheme-based sensor data repository. As can be seen in Figure 2b, the sensor document consists of the ID of the IoT device, the recorded time, the processed time, the sensor data, and the prediction results.
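As a minimal illustration of this embedding scheme, the sketch below stores one such document with pymongo; the connection string, database and collection names, and the field names are illustrative assumptions rather than the exact schema used in the system (the authoritative structure is the one shown in Figure 2b).

```python
# Minimal sketch of an embedding-scheme sensor document (field names, database and
# collection names are illustrative assumptions; the actual schema is in Figure 2b).
from datetime import datetime, timezone

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumed connection string
collection = client["monitoring"]["sensor_data"]    # assumed database/collection names

sensor_document = {
    "device_id": "iot-sensor-01",                   # ID of the IoT device
    "recorded_time": datetime(2017, 8, 1, 9, 0, 0, tzinfo=timezone.utc),
    "processed_time": datetime.now(timezone.utc),
    "sensor": {                                     # sensor readings embedded as a subdocument
        "temperature": 24.8,                        # °C
        "humidity": 41.2,                           # % rh
        "accelerometer": {"x": 0.01, "y": -0.02, "z": 0.98},
        "gyroscope": {"x": 0.10, "y": 0.03, "z": -0.05},
    },
    "prediction": "no",                             # fault prediction result ("yes"/"no")
}

collection.insert_one(sensor_document)
```

Keeping each reading and its prediction in a single self-contained document is what makes single-document reads and writes fast under this scheme.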
As shown in Figure 2b, the sensor data such as temperature, humidity, gyroscope, and accelerometer data are embedded as a subdocument. System Implementation In this study, the monitoring system was applied to monitor the assembly line process for producing door-trim at an automotive manufacturing plant in Korea, as shown in Figure 3. The developed IoT-based sensor consists of a Raspberry Pi [68] as the single main board and a Sense-HAT [69] as an add-on sensor board. The Raspberry Pi is a small single-board computer with dimensions of 85.60 mm × 53.98 mm × 17 mm, weighs only 45 g, and is affordable at approximately $25-35 USD. It has USB, LAN, HDMI, audio, and video ports for various input and output operations. In addition, general-purpose input-output (GPIO) connectors enable additional devices, or add-on boards such as sensors, to be connected to the main board [70]. The detailed specifications of the Raspberry Pi board can be seen in Table 1. The Sense-HAT board is an add-on sensor board that measures temperature, humidity, accelerometer, and gyroscope data and is designed as an official add-on board for the Raspberry Pi. The detailed specifications of the Sense-HAT board can be seen in Table 2. The Sense-HAT board is attached to the Raspberry Pi via the 40-pin GPIO header. The assembled and real-case implementation versions of the IoT-based sensor device can be seen in Figure 3. In this study, we developed a python-based program as a client using the supplied official application programming interface (API) to gather sensor data from the IoT-based sensors [71]. The IoT-based sensors continuously collect temperature, humidity, gyroscope, and accelerometer data, which are transmitted to a cloud server wirelessly. As can be seen in Figure 3, an IoT-based sensor device is attached to the desk of a workstation panel along the assembly line. The IoT-based sensor senses the environmental conditions and sends the sensor data to the cloud server every 5 s.
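A minimal sketch of such a client program is shown below; it assumes the official Sense-HAT Python API and the kafka-python client library, and the broker address, topic name, and device ID are illustrative assumptions (only the 5 s reading period matches the deployment described above).

```python
# Minimal sketch of the python-based client: read the Sense-HAT every 5 s and publish
# the reading as JSON to a Kafka topic (broker address, topic name, and device ID are
# illustrative assumptions).
import json
import time
from datetime import datetime, timezone

from kafka import KafkaProducer      # kafka-python client library
from sense_hat import SenseHat       # official Sense-HAT Python API

sense = SenseHat()
producer = KafkaProducer(
    bootstrap_servers="cloud-server:9092",                     # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),  # publish records as JSON
)

DEVICE_ID = "iot-sensor-01"   # illustrative device ID

while True:
    record = {
        "device_id": DEVICE_ID,
        "recorded_time": datetime.now(timezone.utc).isoformat(),
        "temperature": sense.get_temperature(),          # °C
        "humidity": sense.get_humidity(),                 # % relative humidity
        "accelerometer": sense.get_accelerometer_raw(),   # {"x": ..., "y": ..., "z": ...} in g
        "gyroscope": sense.get_gyroscope_raw(),           # {"x": ..., "y": ..., "z": ...} in rad/s
    }
    producer.send("sensor-data", record)   # assumed topic name
    time.sleep(5)                          # one reading every 5 s, as in the deployment
```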
The sensor data are processed by the big data processing system and analyzed further in real-time. Finally, the historical sensor data are saved in MongoDB and presented on a web-based monitoring system in real-time. Hybrid Prediction Model for Fault Detection In this study, the hybrid prediction model is utilized to predict whether the process is functioning normally or abnormally. Figure 4 shows the process of detecting normal or abnormal events during the manufacturing process. The hybrid prediction model utilizes DBSCAN-based outlier detection to detect and remove outliers from the sensor data and a Random Forest-based classification model to predict normal and abnormal events. Finally, the performance is evaluated by comparing the hybrid prediction model with other classification models. For the performance evaluation of the various prediction models, the dataset was collected from experiments in a lab in which the IoT-based sensor was installed. The collected dataset consisted of 342 instances, which were classified as normal or abnormal events during the manufacturing process. The dataset contained eight features: (1) temperature (°C), (2) humidity (% relative humidity/rh), (3) the X value of the accelerometer, (4) the Y value of the accelerometer, (5) the Z value of the accelerometer, (6) the X value of the gyroscope, (7) the Y value of the gyroscope, and (8) the Z value of the gyroscope. The dataset consisted of 102 data points labeled as "yes" and 240 labeled as "no". A "yes" class indicates that an abnormal event occurred, while a "no" class means that an abnormal event did not occur during the manufacturing process (normal). In addition, the collected training dataset (342 instances) was labeled based on the possible combinations of fault events during the assembly line process in automotive manufacturing. The machine learning methods are expected to learn and generate a robust model/classifier from the collected dataset. Once the model/classifier is generated and installed in the monitoring system, the prediction results for real-time IoT-based sensor data can be presented.
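In the deployed system this prediction step runs inside an Apache Storm topology; purely to illustrate the logic, the sketch below consumes the Kafka stream directly in Python and applies a previously trained and pickled classifier to each record. The topic name, broker address, model file, and feature order are assumptions for illustration.

```python
# Illustrative sketch only: in the deployed system this logic runs inside an Apache
# Storm bolt. Here the Kafka stream is consumed directly and a previously trained,
# pickled classifier (an assumption) attaches a fault prediction to each record.
import json
import pickle

from kafka import KafkaConsumer   # kafka-python client library

# Assumed: a classifier trained offline on the eight features described in the text,
# saved with pickle (e.g., the Random Forest from the hybrid prediction model).
with open("fault_classifier.pkl", "rb") as f:
    model = pickle.load(f)

consumer = KafkaConsumer(
    "sensor-data",                                        # assumed topic name
    bootstrap_servers="cloud-server:9092",                # assumed broker address
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    r = message.value
    features = [
        r["temperature"], r["humidity"],
        r["accelerometer"]["x"], r["accelerometer"]["y"], r["accelerometer"]["z"],
        r["gyroscope"]["x"], r["gyroscope"]["y"], r["gyroscope"]["z"],
    ]
    r["prediction"] = model.predict([features])[0]   # "yes" (abnormal) or "no" (normal)
    # In the real system the enriched record would be written to MongoDB and pushed
    # to the web-based monitoring dashboard; here it is simply printed.
    print(r)
```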
Once the dataset was collected, data preprocessing was performed by removing inappropriate, inconsistent, and missing-value data. Table 3 shows the dataset distribution for the mean and standard deviation of each class. Furthermore, in order to analyze the significance of the features, the Information Gain (IG) technique was applied [72]. Weka version 3.6.15 software was utilized to evaluate the significance of the features with IG [73]. The dataset attributes and their IG scores are presented in Table 4. The results show that temperature is the factor that most strongly affects abnormal events during the manufacturing process. DBSCAN-based outlier detection was utilized in our study to filter out outlier data from the dataset [52]. Dense regions are created by finding the objects close to a given point. Outliers are defined as the points located outside the dense regions. Epsilon (eps) and minimum points (MinPts) are the two important parameters considered in DBSCAN. eps defines the radius of the neighborhood around a point x (the ε-neighborhood of x), and MinPts defines the minimum number of neighboring points within the radius eps. For a dataset D, in which all points are initially marked as unvisited, DBSCAN works as follows: • For each unvisited point x_i in D, find the ε-neighborhood of x_i that includes at least MinPts points. Then x_i is labeled as visited. • For a point x_i that is not yet assigned to a specific cluster, create a new cluster C. Add the points in the ε-neighborhood of x_i to a candidate set N. Add any points in N (that do not belong to any cluster) to C.
• For each point p in N, find the ε-neighborhood of p that includes at least MinPts points. Those points in the ε-neighborhood of p are then included in the candidate set N and assigned to cluster C. Finally, p is labeled as visited. • Iterate the process for the remaining points in N and the unvisited points in the dataset D. • The points that do not belong to any cluster are labeled as outliers. Due to imperfect sensing devices and network connection problems, some of the data collected by the sensors may be noisy and contain outliers. Outlier detection based on DBSCAN was applied to our dataset. The optimal values of MinPts and eps should be defined first in order to perform DBSCAN-based outlier detection. If the value of eps is too small, more clusters will be created, and normal data could be classified as outliers. However, if it is too big, fewer clusters will be generated, and true outliers could be classified as normal data. Through different experimental setups, the optimal values of MinPts and eps were found to be 5 and 7, respectively. Figure 5 shows the results of the DBSCAN implementation for the dataset in two-dimensional graphs. DBSCAN performed clustering by grouping the data into three clusters, presented as clusters 1, 2, and 3. The outliers were the unclustered data and are presented as cluster 0. The description of the dataset, the optimal parameters, and the outlier data are presented in Table 5. Finally, the outlier data were removed from the dataset, and the remaining data were used for further analysis. Random Forest is a popular classification method for solving real-world classification problems [66,74,75,76]. The Random Forest algorithm is constructed by combining multiple decision trees for more accurate and stable prediction [77]. Every tree inside a Random Forest is independently constructed by selecting a random subset of features and bootstrap sampling of the dataset. Next, the tree is grown to the largest possible extent. Each decision tree model inside the Random Forest generates a prediction output, and a majority vote is applied to obtain the final prediction output. Majority vote is a well-known method for obtaining a better final prediction output [77]. Previous studies have utilized Random Forest because of its robustness when dealing with numerical data and solving real-world problems [74,75,76]. Recently, Random Forest was utilized for predicting crash stopping maneuvering [76]. The results showed that Random Forest successfully detected the crash stopping maneuvering and could forecast the safety properties of the ship before production. In our study, DBSCAN-based outlier detection was utilized to remove outlier data from the dataset and Random Forest was utilized to learn from the training set. Finally, the results of the prediction were compared with the testing set to determine the model accuracy.
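A minimal sketch of this hybrid prediction model is given below using scikit-learn (the study itself used Weka for the classifiers, so this is an illustrative analogue, and the dataset file and column names are assumptions); it applies DBSCAN with the optimal parameters reported above (eps = 7, MinPts = 5) to remove outliers and then evaluates a Random Forest with 10-fold cross-validation.

```python
# Minimal sketch of the hybrid prediction model with scikit-learn (the study used Weka
# for the classifiers, so this is an illustrative analogue); the CSV file name and the
# "label" column are assumptions.
import pandas as pd
from sklearn.cluster import DBSCAN
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

data = pd.read_csv("sensor_dataset.csv")        # the 342 labeled instances (assumed file)
X = data.drop(columns=["label"]).values          # the eight sensor features
y = data["label"].values                         # "yes" (abnormal) / "no" (normal)

# Step 1: DBSCAN-based outlier detection with the optimal parameters reported above
# (eps = 7, MinPts = 5). Points that belong to no cluster get the label -1 -> outliers.
cluster_labels = DBSCAN(eps=7, min_samples=5).fit_predict(X)
inliers = cluster_labels != -1
X_clean, y_clean = X[inliers], y[inliers]

# Step 2: Random Forest classification on the cleaned data, evaluated with
# 10-fold cross-validation as in the study.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(rf, X_clean, y_clean, cv=10, scoring="accuracy")
print(f"10-fold cross-validation accuracy: {scores.mean():.3f}")
```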
Based on a confusion matrix [78], the prediction output can have four possible outcomes, as can be seen in Table 6. True positive (TP) and true negative (TN) results are defined as the numbers of correctly classified points. False positive (FP) and false negative (FN) results are defined as the number of points incorrectly classified as "yes" (positive) when they are actually "no" (negative), and incorrectly classified as "no" (negative) when they are actually "yes" (positive), respectively. In our dataset, abnormal events during the manufacturing process were defined as "yes" and normal events were defined as "no". For training and testing the dataset, 10-fold cross-validation was applied for all classification models. The final performance measure was obtained by averaging the test performance over all folds. Weka Software 3.6.15 was utilized to run the classification models for the dataset [73]. Table 7 shows the measured performance metrics for the classification model based on precision, recall/sensitivity, and accuracy. Table 6. Confusion matrix of a classifier: an actual "yes" classified as "yes" is a TP, an actual "yes" classified as "no" is an FN, an actual "no" classified as "yes" is an FP, and an actual "no" classified as "no" is a TN. Table 7. Performance metrics for the classification model. Real-Time Monitoring System Data visualization was developed utilizing a JavaScript framework as a monitoring system to present sensor data in real-time. Through the proposed system, the manager can monitor the status of the assembly line process as well as receive an early warning once an abnormal event (fault) is detected in real-time. The IoT-based sensor devices send the sensor data to Apache Kafka; Apache Storm then processes the data and sends the sensor data and the fault prediction results directly to the monitoring system in real-time; finally, the sensor data and the prediction results are stored in MongoDB. As can be seen in Figure 6, the real-time monitoring system can easily be accessed via a web browser on a personal computer. The proposed system presents the sensor data such as temperature, humidity, accelerometer, and gyroscope data in real-time. The device ID (IoT-based sensor device) and the recorded time were collected and presented for every record. In addition, the hybrid prediction model was used to predict faults and present the results in the real-time monitoring system. The proposed system was implemented and tested at an automotive manufacturing plant in Korea from 1 August 2017 to 31 March 2018. Four IoT-based sensor devices were installed in the manufacturing assembly line and transmitted the sensor data to the remote server every 5 s. During this testing period, around 19 million records (with an approximate size of 3 gigabytes) were collected. Our proposed real-time monitoring system consists of three parts: the IoT-based sensor, the big data processing platform, and the hybrid prediction model. The performance evaluations are presented for each part in Section 4.2, Section 4.3, and Section 4.4, respectively. Performance of the IoT-Based Sensor An IoT-based sensor consists of a sensor device and a client program to retrieve sensor data and send them to a cloud server. It is important to analyze the IoT-based sensor performance under various conditions. Performance metrics such as network delay and CPU and memory usage were utilized in this study. Alazzawi and Elkateeb proposed network delay as a metric to evaluate sensor device performance [79], while Morón et al. utilized CPU usage as a metric to evaluate IoT device capabilities in different scenarios [80].
In our study, network delay was defined as the average time between sending sensor data from the source (sensor device) and successfully receiving the data at the destination (MongoDB). The second performance metric was the average CPU and memory usage of the client program under various scenarios. In this study, the client program was a python-based program running on an IoT-based sensor device that collected sensor data such as temperature, humidity, gyroscope, and accelerometer data. An IoT-based sensor running Linux Raspbian OS Jessie with 1 GB of RAM was used for the experiment. Communication between the IoT-based sensor and the cloud server was implemented via Wi-Fi. Figure 7a shows the network delay for different amounts of sensor data. The results show that the network delay increases as the amount of sensor data sent by the sensor device increases. It takes approximately 50 s for the IoT-based sensor to send 1000 sensor data points at the same time. However, in a real-case implementation, it takes less than 0.02 s to send the sensor data, as we only set one sensor data point (temperature, humidity, gyroscope, and accelerometer data) to be sent every 5 s. In addition, Figure 7b shows the CPU and memory usage of the client program. Four different reading period scenarios were evaluated, in which the client program was reading and sending sensor data to the cloud server every 5, 10, 30, and 60 s. The results showed that the reading period has a very small effect on CPU or memory usage. Regarding the computational cost of the client program, it should be noted that the program used less than 3% CPU and 18 MB of memory for all reading periods.
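The paper does not state the exact tooling used for these measurements; as a rough sketch of how such metrics could be sampled on the device, the snippet below uses psutil for CPU and memory and a simple round-trip timer for network delay (both are assumptions for illustration).

```python
# Rough sketch of client-side metric sampling (the paper does not state its tooling;
# psutil and the simple round-trip timer below are assumptions for illustration).
import time

import psutil

process = psutil.Process()   # the running client program itself

def sample_resource_usage():
    """Return the client's CPU utilization (%) and resident memory (MB)."""
    cpu_percent = process.cpu_percent(interval=1.0)         # sampled over 1 s
    memory_mb = process.memory_info().rss / (1024 * 1024)   # resident set size in MB
    return cpu_percent, memory_mb

def measure_network_delay(send_record, wait_for_ack):
    """Time from sending one sensor record until the destination confirms receipt."""
    start = time.monotonic()
    send_record()      # e.g., producer.send(...) followed by producer.flush()
    wait_for_ack()     # e.g., poll the server/MongoDB until the record is visible
    return time.monotonic() - start
```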
Figure 7. The IoT-based sensor system's (a) network delay, and (b) CPU and memory usage. The Performance of Big Data Processing It is important to analyze the performance of big data processing under various conditions. Performance metrics such as system latency, throughput, and concurrency were utilized in this study. Pereira et al. utilized system latency and throughput to evaluate the performance of big data technology under different operations [81], while Van der Veen et al. used concurrency to evaluate big data technology under multiple clients [82]. In our study, system latency is defined as the time needed by the proposed system to handle, process, and store the sensor data in the database. Throughput is defined as the total number of sensor data records processed per second. The last metric is concurrency, which is defined as the number of clients accessing the system simultaneously. The experiments were conducted with different numbers of servers, and the response time was collected for analysis. A Java program was developed as a simulator to generate sensor data and send the data to the big data processing servers. The server was installed with Apache Kafka, Apache Storm, and MongoDB. Threads were used by the Java program to simulate multiple clients. The detailed specifications of the client and server computers can be seen in Table 8. In addition, the approximate size of each simulated data record is around 211 bytes, consisting of the device ID, the date and time when the data were generated, and the values of the sensor data (temperature, humidity, accelerometer, and gyroscope). Figure 8a shows that as the amount of sensor data sent to the cloud server increased, the response time also increased. The number of clients also affected the response time, since more time was required for the proposed system to process and store sensor data sent by a larger number of clients simultaneously. However, taking advantage of scalability support by adding more servers can help achieve a lower response time compared to a single server, as shown in Figure 8b. Figure 8c,d show the system throughput with different numbers of clients. Better performance could be achieved by increasing the number of servers. Furthermore, Figure 8e,f compare the system latency and database size of MongoDB and CouchDB. In this test, we used a single client and sent different amounts of sensor data to the cloud server at the same time. A Java program was implemented on the client side to send the sensor data to the cloud server. MongoDB performed better than CouchDB when the amount of sensor data increased. In addition, MongoDB occupied a smaller database size than CouchDB did.
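The load simulator described above was written in Java; as an illustrative analogue in Python, the sketch below spawns several concurrent clients that publish small JSON records (comparable to the ~211-byte simulated records) to Kafka and reports the elapsed time and throughput. The broker address, topic name, record layout, and client counts are assumptions.

```python
# Illustrative Python analogue of the Java load simulator described above; broker
# address, topic name, record layout, and client counts are assumptions.
import json
import threading
import time
from datetime import datetime, timezone

from kafka import KafkaProducer

BROKER = "cloud-server:9092"     # assumed broker address
TOPIC = "sensor-data"            # assumed topic name
RECORDS_PER_CLIENT = 1000

def simulated_client(client_id: int) -> None:
    """One simulated client: publish RECORDS_PER_CLIENT JSON sensor records."""
    producer = KafkaProducer(
        bootstrap_servers=BROKER,
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    for _ in range(RECORDS_PER_CLIENT):
        record = {  # roughly the size of the ~211-byte simulated records
            "device_id": f"sim-{client_id}",
            "recorded_time": datetime.now(timezone.utc).isoformat(),
            "temperature": 24.8,
            "humidity": 41.2,
            "accelerometer": {"x": 0.01, "y": -0.02, "z": 0.98},
            "gyroscope": {"x": 0.10, "y": 0.03, "z": -0.05},
        }
        producer.send(TOPIC, record)
    producer.flush()   # wait until all records have been acknowledged

for n_clients in (1, 5, 10):   # assumed concurrency levels
    threads = [threading.Thread(target=simulated_client, args=(c,)) for c in range(n_clients)]
    start = time.monotonic()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.monotonic() - start
    total = n_clients * RECORDS_PER_CLIENT
    print(f"{n_clients} clients: {elapsed:.1f} s, throughput {total / elapsed:.0f} records/s")
```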
Hybrid Prediction Model for Fault Detection During dataset generation, the big data processing system receives the sensor data from the IoT-based sensor device and stores the data in NoSQL MongoDB. The IoT-based sensor collects data from different types of operation, including normal and abnormal events. The dataset is then labeled by expert users based on the process status (either normal or abnormal) during the period when the sensor data were collected. Next, the dataset is analyzed using the hybrid prediction model to predict the fault status. The performance comparison results for several classification models are presented in Table 9. Several conventional classification models such as Naïve Bayes (NB), Logistic Regression (LR), Multilayer Perceptron (MLP), and Random Forest (RF) were compared with the hybrid prediction model to identify and predict abnormal events. The proposed model achieved the highest accuracy (100%) compared to the other classification models. There was a slight improvement in model accuracy after the implementation of DBSCAN-based outlier detection. Integrating DBSCAN-based outlier detection with the Random Forest model increased the accuracy by as much as 1.462% compared to conventional Random Forest.
Furthermore, accuracy improvements were also found for the other conventional classification models after applying DBSCAN for outlier detection: as much as 3.173%, 0.567%, and 2.026% for Naïve Bayes, Logistic Regression, and Multilayer Perceptron, respectively. The proposed model was implemented in Apache Storm, where the streams of data from Kafka can be processed and predictions made in parallel and in real-time. Figure 6 shows the results of the implementation, where real-time prediction is performed by Apache Storm to identify whether the process is functioning normally or abnormally given the input data from the IoT-based sensor (e.g., temperature, humidity, accelerometer, and gyroscope). The results of the study are expected to help management prevent unexpected losses caused by faults at an early stage and improve decision-making during the manufacturing process. Managerial Implications In this study, the proposed system consists of three parts: the IoT-based sensor, big data processing, and the machine learning model. First, the IoT-based sensor device developed in this study is based on the Raspberry Pi, which is a small, low-cost, and powerful single-board computer. Previous studies have shown significant advantages of utilizing the Raspberry Pi, such as for controlling and monitoring IoT systems [83], estimating the roll angle of a vehicle using an embedded neural network in real-time [84], hosting and serving the user interface of an eHealth care system [59], and monitoring the temperature of a lava lake using a near-infrared thermal camera [85]. Therefore, the proposed IoT-based sensor device developed in this study can be applied to monitor the manufacturing process in real-time. Second, as the number of IoT devices increases, it is necessary to develop new big data processing platforms that can effectively handle, process, and store the data without detectable performance loss. Previous studies revealed that by implementing open source software (OSS), organizations can achieve economic gains in terms of software development productivity and product quality, as well as lower costs (i.e., license costs) and the availability of external support [86,87]. In our study, the developed big data processing platform is based on OSS, which makes it cost-effective to implement and integrate. Third, machine learning has been used in various processes for monitoring systems in manufacturing and predictive maintenance in different industries [88][89][90][91][92]. Machine learning provides powerful tools for continuous quality improvement in large and complex processes such as semiconductor manufacturing [89,90,92]. In our study, the machine learning model is used to detect faults (abnormal events) during the assembly line process in real-time. Thus, it is expected to support management in improving decision-making and preventing unexpected losses caused by faults at an early stage during the manufacturing process. Finally, the overall results of the study can be used as a guideline for industrial practitioners in adopting IoT, big data, and machine learning for their manufacturing processes. Previous scholars and practitioners have considered several aspects of big data. Big data is often described in terms of the 4 V's: volume (the size of the data), variety (the different types of data), velocity (the speed of data generation), and veracity (the reliability of the data) [61]. However, some scholars focus more on one or more aspects of the big data concept. Davenport et al.
focused more on the variety aspect of data sources [93], while other authors emphasized the storage (volume) and analysis aspects when dealing with big data [94,95]. A big data processing platform that can efficiently handle fast incoming (velocity) and huge amounts (volume) of sensor data was developed in our study. Finally, the integration of an IoT-based sensor, big data processing, and a machine learning model can be utilized to effectively monitor the manufacturing process, as well as to obtain early warning notifications when an abnormal event is detected in real-time. Conclusions In this study, we developed a real-time monitoring system that utilizes IoT-based sensors, big data processing, and a hybrid prediction model. The proposed model is expected to help managers monitor the status of the assembly line process and to identify faults in the process, so that unexpected losses caused by faults can be prevented. Through this study, we showed that integrating IoT-based sensors with a big data processing system is effective for processing and analyzing large amounts of sensor data in real-time. The big data processing system developed in this study utilizes Apache Kafka, Apache Storm, and NoSQL MongoDB. The experimental results showed that the system is scalable and can process a large amount of continuous sensor data more efficiently than traditional models. Furthermore, the performance of the IoT-based sensor was analyzed with various metrics such as network delay and CPU and memory usage. For all experimental scenarios, the IoT-based sensor provided an efficient solution, as it successfully collected and transmitted the data within an acceptable time and with low computational cost. Fault detection is an important issue in the manufacturing process, as it can identify whether the process is functioning normally or abnormally. We proposed a hybrid prediction model that consists of DBSCAN-based outlier detection and Random Forest classification. DBSCAN was used to separate outliers from normal sensor data, while Random Forest was utilized to predict faults given the sensor data as input. The results showed that the proposed hybrid prediction model is effective, with high accuracy compared to the other models tested. The results of the study are expected to support management and improve decision-making during manufacturing, helping prevent unexpected losses caused by faults. Security is a major issue as more IoT devices are adopted, implemented, and connected. Therefore, the security of IoT devices and platforms should be considered in a future study. Furthermore, a wider variety of abnormal conditions during the manufacturing process should be identified and collected so that the proposed hybrid prediction model can learn from a more complex dataset in the near future.
14,130.2
2018-09-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Rethinking post-Covid-19 school design in Brazil: adaptation strategies for public schools PEE-12 FNDE In 2020, the World Health Organization (WHO) declared the disease COVID-19, whose causative virus is SARS-CoV-2, a pandemic. An important measure was the closure of schools in several countries to try to reduce contagion levels, so that neither students nor their families were exposed to risk. The question that arises within this context is: in school architecture, what are the appropriate design methods to deal with challenges during and after a pandemic? In this scope, the article aimed to propose an adaptive design scenario for the post-pandemic moment for a standard school in Brazil. The methodology was built through a literature review and multidisciplinary research, to later present strategies based on the recommendations of competent bodies and studies focused on school architecture, design patterns for 21st-century schools, technology, and safety. The focus was on design challenges in the education field in the post-pandemic moment, and on the adaptation of built school spaces for the return of activities. The results can help the school community and public agencies in making decisions to face this challenge, recreating safer, user-centered schools. INTRODUCTION In 2020, the World Health Organization (WHO) declared the disease COVID-19, caused by the SARS-CoV-2 virus, a pandemic. In order to limit the risk of contagion, an important measure was the closing of schools in several countries. This fact precipitated an educational crisis, further deepening social inequalities and weaknesses (UNESCO, 2020). With the pandemic, Digital Information and Communication Technologies (TDICs) emerged as an opportunity for students to learn outside the classroom. However, a considerable portion of students have no structure or access to a quality internet network. According to Van Lancker & Parolin (2020), in Europe, 5% of children do not have a good place to do their homework, and 6.9% do not have access to the internet. As social distancing measures are still imprecise, it is urgent to identify how countries can safely return students to education and parents to work (Viner et al., 2020). According to the authors, policymakers and researchers should look for forms of social distancing in schools that are less disruptive than the total closure of these places, which can contribute substantially to maintaining control of the pandemic (Viner et al., 2020). It is necessary to reflect on the quality of the traditional educational system, considering the critical role of the physical school environment in the quality of learning and in the well-being and social construction of the individual, as well as the flexibility and wholesomeness of educational spaces. In the design process, there is a fertile field for practices and strategies that directly impact the construction of quality schools. Thus, the questions that arise are: in the current pandemic scenario, does the architectural design of schools offer a resilient and safe dimension for adaptations to the post-pandemic moment? How can architects and designers face the challenges of the new reality, and how can they adapt and reconfigure the school environment? Thus, the objective of this article is to propose an adaptive design scenario for the post-pandemic moment for the standard school called Projeto Espaço Educativo 12 Salas, from the National Fund for Education Development, in Brazil.
The methodology was built through a literature review based on the systematization and analysis of articles related to back-to-school safety, school architecture, learning spaces suitable for the 21st century, and technology. Figure 1 illustrates, in general terms, how the methodological process of data research was carried out for the re-design of the case study. In general, this research aims to foster reflection on the future of the school environment in the face of the pandemic and other emerging situations, and to suggest a way of rethinking educational spaces so that they are accessible and welcoming. STRATEGIES FOR REOPENING SCHOOLS The course of thought for now, despite many uncertainties in decision-making, is to reflect on how to proceed when public authorities allow schools to reopen for face-to-face education. In this process, it is essential to consider examples of success in other countries for possible implementation at the national level, in addition to a constant reassessment of the benefits and limitations of this transition. The important thing, at this stage, is to ensure safe conditions for the health of the occupants. Sheikh et al. (2020) summarized four approaches that are being used internationally to enable a safe return to schools: 1. Keep schools closed until a vaccine can be administered to achieve immunity, or a treatment is found. 2. Reopen schools completely when the effective reproduction number (Rt) is well below 1. Despite the benefits of resuming face-to-face education, there is a risk of triggering additional peaks of infection. 3. Partially reopen schools, so that there are fewer students in the school. 4. The hybrid approach, in which face-to-face classes are transmitted live to those who need to be protected because of chronic illnesses or who have the ability to study at home. However, this depends on high-speed internet access and appropriate devices. In Denmark, for example, children are taught outdoors and maintain a physical distance of 2 meters. In this case, a rearrangement of tables has been proposed in an attempt to reduce transmission through droplets and contact (Sheikh et al., 2020). For Fantini et al. (2020), safety measures for the reopening of schools may include the creation of small, fixed groups of children, in order to maintain social distance, taking into account the available spaces and considering the implementation of different shifts for attending school. Other strategies proposed by Fantini et al. (2020) are: avoid sharing materials, relocate rooms and common areas, ensure frequent access to hand washing, ensure ventilation, and sanitize environments. In summary, strategies for returning to face-to-face classes at schools must be conducted according to each local, social, and environmental context, with due care and strict control and inspection. Whatever strategy is adopted, carefully planned assessments are crucial to help develop a robust evidence base and guide decisions for this and future pandemics (Sheikh et al., 2020). Campos (2020), in research on resilience, education, and architecture, highlights that by interacting with human emotion, the architectural composition of a school influences the mind of the student, and also points out that several studies have confirmed that environmental characteristics act on brain processes and learning.
DESIGN FOR 21st CENTURY SCHOOLS Over the past decades, schools were constituted of classrooms as well-known physical and social structures, where the planned pedagogies were based on teacher-focused instruction and the associated spatial arrangements. However, in recent years, there has been a growing appetite for allowing a wider range of pedagogical approaches than is possible in traditional classrooms (Young et al., 2019). Demirel (2009) stated that in the new teaching period of the 21st century, the advancement of skill levels, self-education, self-development, and the full use of individual skills would be in the foreground. Campos (2020) further states that it is necessary to take into account the "educational" value of architecture in order to reinforce the proactive dimension of resilience. Schools with seriously considered architecture are safer schools, provide friendly environments, and motivate students to learn. Thinking of establishing essential characteristics for a school project suitable for the 21st century, Nair et al. (2013) delimited 29 design patterns that can be applied according to each local context (Table 1). Based on this evidence, it is relevant to mention that with adequate and stimulating educational spaces, learning can be facilitated. Therefore, in the present-day field of education, it is essential to discuss how the school environment of the future can overcome barriers evidenced by the pandemic. It is necessary to transform the school structure into functional, stimulating, accessible, and democratic spaces that provide well-being and are adapted to the needs of users in the 21st century. TECHNOLOGY IN SCHOOLS According to Mayes et al. (2015), exposure to technologies generally meets students' expectations, improves productivity, contributes to successful careers, and complements lifelong learning skills. However, Barrett et al. (2019) point out that, despite the benefits of moderate computer use in the classroom to assist learning, there are some adverse effects of intense use of this type of equipment, as it modifies engagement in the teacher-student relationship. This requires efforts, improvement, and discussion among those involved in this process. Globally, changes in education were needed in 2020 so that many children could continue their studies online during the COVID-19 pandemic, with the help of technology in their homes. However, it was not an enriching experience for all students, since a large number of families, especially low-income ones, do not have adequate equipment or quality internet access. In addition to the difficulties faced by students, the COVID-19 pandemic has highlighted the challenge for teachers of adapting to e-learning. Cani et al. (2020) pointed out that, for many teachers, the art of reinventing themselves and restructuring their didactics for the new practices has not been an easy task. However, they emphasize that one of the positive aspects is that schools will never be the same, since the concept of an educational space within walls has been replaced by the idea of flexible and technological spaces (Cani et al., 2020). For Sarmento et al. (2020), blended learning, with broad access to information, brings a profound change in the roles played by teachers and students. The knowledge domain relations in the classroom are modified, and, in the same way, the configuration of learning environments needs to be reviewed and updated (Sarmento et al., 2020).
In order to guarantee learning environments aligned with blended learning, Sarmento et al. (2020) presented some technical specifications for public schools, attending to studentcentred education. Among them, some specifications of constructive elements and systems are highlighted in Table 2. Color Prioritize students' visual comfort, avoid eye fatigue. Living spaces Shaded spaces for relaxation and contemplation of nature, with comfortable and varied furniture. Bathrooms Close to classrooms and with domestic configuration; If there are internal cabins, doors and rooms must be from floor to ceiling; Partition between female and male bathrooms, and minimum sizing of 1 toilet and one washbasin for every 20 students. Energy and lighting Arrangement of sockets allowing the connection of cell phone and laptop chargers to student desks; Installation of a baseboard that allows electrical wiring to pass and open new energy points; Lighting that alternates with daylight, without glare; Specific lighting for blackboard; On work desks, provide lamps, use direct light, and lamps with good colour reproduction; Control system for lighting levels; Automatic activation of luminaires in rows (parallel to the windows) to be activated, as natural light decreases its intensity; Locate switches close to workstations, as well as close to doors; General key that controls the total lighting of the room. Internet Technological resources control room for technical support to equipment and management-high-capacity Wi-Fi. Learning Management and Sharing Wi-Fi connection between the physical environment and online classroom environment, for sharing educational content. In addition to these factors, Sarmento et al. (2020) mention others that involve environmental comfort, fine furniture and equipment for educational and personal use. Based on this, it can be said that the world, increasingly globalized, will create new opportunities for designers and architects to design and adapt to the necessary technological changes in the field of education, as in several other areas of knowledge. THE STANDARD PROJECT FNDE EDUCATIONAL SPACE PROGRAM -PEE-12 IN BRAZIL: STRATEGIES FOR ADAPTING TO THE POST-PANDEMIC MOMENT Projeto Espaço Educativo 12 Salas -PEE-12 is a standard public school architectural project for elementary school, and is part of the Plan of Articulated Actions -PAR, a policy of the Brazilian government to support physical structuring, together with the National Development Fund of Education. The program seeks to encourage the school's physical infrastructure, through funds to build schools or purchase equipment. This architectural typology aims to serve 780 students in elementary school, in two shifts (FNDE, 2020). There are currently 216 schools completed in this architectural typology in the country since 2013, in addition to others in progress, paralyzed or not yet started (BRAZIL, 2020). PEE-12 presents, in the architectural party, independent blocks that are interconnected by covered external circulations, obeying the proposed sectorization. The standard school has two technological blocks, one with laboratories and student council, and the other with a library, teachers' room and auditorium; an administrative block; three educational blocks with traditional classrooms and bathrooms; a block with a covered patio and a kitchen; an uncovered patio, functioning as a living square; an indoor court and changing rooms. Figure 2 illustrates the school overview. Figure 2. 
Layout of each block of the school's standard project (FNDE, 2020, adapted by authors). As for the classrooms, all have the same architectural typology, with tall windows facing the internal circulation of the school and larger windows facing the outside, in addition to two doors, also placed in the two longer walls, allowing cross ventilation (Figure 3). The standard typology of these schools is distributed throughout most of Brazil, often disregarding the local, environmental and social context. In the current situation, with face-to-face classes paralyzed, it is essential to provide strategies for their return, with the necessary care to prevent the spread of the coronavirus and to promote safer school spaces. Below, we systematize some short-term strategies for a safer adaptive architectural design in order to reinvent the school space in these primary education institutions. Strategies for Adapting School Architectural Design Considering the literature review presented in this article, some adaptation strategies have been proposed for resuming face-to-face classes in times of the pandemic caused by Covid-19 for standard public education schools in Brazil, PEE-12 - FNDE. Intervention possibilities have been proposed within the limitations imposed by standard projects. It would not be feasible in this study to propose long-term changes, because the objective here is to bring proposals for an urgent situation: returning to classes in the face of the pandemic caused by the coronavirus. These strategies are systematized in Table 3, based on some design patterns for 21st-century schools according to Nair et al. (2013), on the Byers et al. (2018) research on innovative learning environments, on the Sarmento et al. (2020) research with strategies aligned with blended learning, on the hybrid approach of Sheikh et al. (2020), and on the strategies of Fantini et al. (2020). As for the design patterns of Nair et al. (2013), opportunities for adaptation were identified from the recommendations contained in these patterns, and the patterns selected were those that meet pandemic adaptation strategies. Table 3 pairs each item with its technical specifications, as follows.

Class Format
- In-person and online, giving priority to hybrid teaching and familiarizing students with this new teaching modality.
- Access to content remotely through a digital platform, with monitoring of the student's progress.
- Face-to-face classes should be broadcast live for those who have the opportunity to study at home, or for those who have a disease and are at risk.
- Divide students into different shifts to reduce the number of people in the classroom.
- Take advantage of open spaces for outdoor classes, in nature, within the possibilities of each region. Figure 4 illustrates some environments that can be used for these classes.

Classrooms, Learning Studios, Advisories and Small Learning Communities
- Independent study during the pandemic.
- Small groups of students, with reduced class density.
- Learning based on mobile technology (laptops, smartphones).
- Distance of 2 meters between chairs, mainly in the post-pandemic moment, to avoid contact, considering the available physical space, as shown in Figure 5.
- The furniture must be accessible and flexible.
- Classrooms with a polycentric layout, combining the use of digital technologies, considering the available physical space.

Sanitation
- Guide students to avoid sharing materials; information boards.
- Ensure frequent access to hand washing and, if possible, relocate washbasins close to each classroom.
- Provide hand sanitizer for hand cleaning in all classrooms, laboratories and the library.
- Ensure periodic cleaning in all school environments, including furniture.
- Coatings that are difficult to clean and maintain must be changed.
- Place sanitary mats for cleaning shoes when accessing the school.
- Use of masks to prevent droplet transmission, for as long as health agencies and public authorities consider it relevant.

Home Base and Individual Storage
- Ensure a larger, individualized space for each student to store their material, such as individual lockers with a key.

Casual Eating Areas
- Provide snacks and meals in smaller "cafes" that students can access during school hours. This pattern suggests more intimate locations than the cafeteria, and external areas can be used to place furniture that is comfortable and suitable for outdoor conditions.

Ventilation
- Classrooms must allow cross ventilation, by opening the windows on the side walls of the blackboard.

Indoor/Outdoor Connection
- Allow more generous access to external areas and contact with nature.
- Views from inside the rooms to green areas are desirable, as are direct physical connections.

Dispersed Technology
- High-speed internet access in all school environments, including hallways, yard and common areas, for students and teachers to use for educational content. The Wi-Fi network could even be made accessible to students who do not have internet at home, in case of future pandemics, to access the school's educational content.

Furniture: soft seating
- Provide comfortable, upholstered chairs in easy-to-clean material.
- Enable diversity of furniture, with colours that avoid fatigue and stimulate concentration.

Cave Space
- Adapt individual spaces for reflection and study, with adequate furniture. Spaces outside the school, in the middle of nature, can be used.

Equipment
- Devices such as laptops, computers or tablets available to students.
- Installation of projectors and a digital whiteboard.
- Blackout curtains or blinds on the windows, for activities that involve the use of a projector, avoiding reflection or glare.
- Provide printers and computers of suitable quality.

Electrical installations
- Installation of a greater number of sockets, close to students' desks, the teacher's desk, and doors, allowing the charging of laptops, tablets or smartphones.

Teacher training
- Create public policies that offer training and courses to teachers, especially concerning e-learning and digital educational tools.

Technical support team
- Ensure adequate technical support for technological equipment and the Wi-Fi network.

Figure 4 shows open spaces present in standard PEE-12 schools that can be used for outdoor classes, depending on the local context, climate and conditions that allow this teaching methodology. Figure 6 shows design studies for the "cave space", using wood, vegetation, good lighting, available technology and comfortable furniture for study, and for casual eating areas, with tables, chairs and sofas for students to use, proposing an environment that functions as an "outdoor cafe" at school. Adapting to the new post-pandemic reality is a significant challenge in all countries, especially for low-income populations, since many of the strategies mentioned involve economic factors, in addition to cultural and social aspects rooted in society. However, the discussion present in this study brings to light the need to remodel the design for education as a whole. 
In this context, architects, urban planners, designers, the community and policymakers must be attentive to the users' needs and desires, combining design in a creative, welcoming and safe way. CONCLUSIONS Currently, there are several debates about the reflection of the COVID-19 pandemic in education, and the answers are not yet concrete. In this context, the article sought to discuss the need to rethink the standard FNDE school projects, and propose new solutions to a social problem, through adaptive design strategies of the school environment for the postpandemic moment. It is possible that the design parameters for 21st Century schools, combined with concepts such as innovative learning environments, hybrid teaching and the inclusion of technologies can enable healthier educational spaces that promote well-being. Such adaptation strategies could be implemented when health authorities and agencies allow the resumption of face-to-face classes. However, they must, first of all, prioritize the health of the occupants. It is remarkable how COVID-19 demonstrated the importance of the design response to the user's well-being, which ranges from simple tasks such as installing information signs to more complex changes. In the study of the school project in this article, among several other points, some possible strategies to be applied are: layouts that allow the focus on the student while maintaining social distance, adequate ventilation, quality internet access, individual study spaces, connection between interior and outside, technological equipment available to students and comfortable and diverse furniture. It is difficult to say that with these re-design proposals applied, education responses to the pandemic will be resolved. However, the results of this article provide an opportunity for reflection and change. They can assist in the search for solutions to social, environmental, and public health problems within the school space, assisting public agencies, architects and designers. Bearing in mind that design is inherent in the process of designing project systems, the design strategies of this research can be useful in planning future school architectural projects. It is necessary to adapt these traditionally built schools and recreate new schools aligned to the 21st Century that are resilient, safe and flexible. Future research may include scenarios for other school buildings of different architectural types. Besides, the possibilities presented here could be applied in order to evaluate the results and their feasibility. Also, since the field of design comprises a multifaceted area, this research opens the way for ongoing studies related to school design adapted to social changes, for researchers from different areas of knowledge. Finally, it is expected that this situation, in the field of education, caused by the Covid-19 pandemic, will become, in the future, an overall learning process. For now, it is necessary to build tools to face this challenge and reinvent the school space every day. Only then, and reassessing the decisions made now, can this situation be experienced in the future.
5,049.8
2021-04-09T00:00:00.000
[ "Education", "Engineering", "Environmental Science" ]
Reverse engineering approach for improving the quality of mobile applications Background Portable-devices applications (Android applications) are becoming complex software systems that must be developed quickly and continuously evolved to fit new user requirements and execution contexts. Applications must be produced rapidly and advance persistently in order to fit new client requirements and execution settings. However, catering to these imperatives may bring about poor outline decisions on design choices, known as anti-patterns, which may possibly corrupt programming quality and execution. Thus, the automatic detection of anti-patterns is a vital process that facilitates both maintenance and evolution tasks. Additionally, it guides developers to refactor their applications and consequently enhance their quality. Methods We proposed a general method to detect mobile applications’ anti-patterns that can detect both semantic and structural design anti-patterns. The proposed method is via reverse-engineering and ontology by using a UML modeling environment, an OWL ontology-based platform and ontology-driven conceptual modeling. We present and test a new method that generates the OWL ontology of mobile applications and analyzes the relationships among object-oriented anti-patterns and offer methods to resolve the anti-patterns by detecting and treating 15 different design’s semantic and structural anti-patterns that occurred in analyzing of 29 mobile applications. We choose 29 mobile applications randomly. Selecting a browser is not a criterion in this method because the proposed method is applied on a design level. We demonstrate a semantic integration method to reduce the incidence of anti-patterns using the ontology merging on mobile applications. Results The proposed method detected 15 semantic and structural design anti-patterns which have appeared 1,262 times in a random sample of 29 mobile applications. The proposed method introduced a new classification of the anti-patterns divided into four groups. “The anti-patterns in the class group” is the most group that has the maximum occurrences of anti-patterns and “The anti-patterns in the operation group” is the smallest one that has the minimum occurrences of the anti-patterns which are detected by the proposed method. The results also showed the correlation between the selected tools which we used as Modelio, the Protégé platform, and the OLED editor of the OntoUML. The results showed that there was a high positive relation between Modelio and Protégé which implies that the combination between both increases the accuracy level of the detection of anti-patterns. In the evaluation and analyzing the suitable integration method, we applied the different methods on homogeneous mobile applications and found that using ontology increased the detection percentage approximately by 11.3% in addition to guaranteed consistency. INTRODUCTION Mobile applications take center stage in our lives today. We utilize them anywhere, at any time and for everything. We use them to peruse websites, shop, search for everything we need and for basic administration such as banking. For the importance of mobile applications, their reliability and quality are critical. Like any other applications, the initial design of mobile applications is affected by bug-settling and the introduction of new properties, which change the initial design; this can occasionally affect the quality of design (Parnas, 1994). 
This aspect is known as software degeneration, which can exist in the form of design flaws or anti-patterns (Eick et al., 2001). One of the most important factors in the development of software systems is improving software quality. The success of software design depends on the availability of quality elements such as maintainability, manageability, testability, and performance. These elements are adversely affected by anti-patterns (Afjehei, Chen & Tsantalis, 2019;Yamashita & Moonen, 2013). Anti-patterns are bad practice in software design. The automatic detection of anti-patterns is a good way to support maintenance, uncomplicate evolution tasks, and improve usability. In addition to the general advantages of detecting anti-patterns, we think that detecting anti-patterns provides developers with a way to ensure that the detected anti-patterns will not be repeated in applications revisions. Also, detecting anti-patterns may improve both operational characteristics and user experience. We note that there are many other approaches interested in detecting anti-patterns in the code level as introduced by Morales et al. (2016) and Alharbi et al. (2014). However, it has been noted that anti-pattern detection at the design level reduces many code anti-patterns and is more general. According to Raja (2008), engineering is the process of designing, manufacturing, assembling, and maintaining products and systems. Engineering has two types, forward engineering, and reverse engineering (RE) as presented by Raja (2008). Chikofsky & Cross (1990) defined RE as the process of analyzing software systems to identify the components of the systems and the interrelationships between them and presenting the systems in other forms or at a higher level of abstraction. The term RE according to our approach, refers to the process of generating UML diagrams followed by generating OWL ontologies of mobile applications through importing and analyzing the bytecode. Generally, we can use ontology re-engineering for direct incorporation as an Ontology development method (Obrst et al., 2014) by allowing the designer to analyze the common components dependence. Designing a pattern of mobile application remains an ongoing research challenge. The proposed approach aims to detect structural and semantic anti-patterns in the design of mobile applications as well as to show which method is better for the integration of applications. Motivated by the research mentioned above, the major contributions of this paper are sixfold: Presenting a new method for generating OWL ontology of mobile applications. Presenting a general method for enhancing the design of a pattern of a mobile application. Illustrating how the proposed method can detect both structural and semantic anti-patterns in the design of mobile applications. Describing how we evaluate the proposed method in 29 mobile applications. Showing how it detects and treats 15 designs' semantic and structural anti-patterns that appeared 1,262 times. Showing how semantic integration among mobile applications decreases the occurrences of anti-patterns in the generated mobile application pattern. Analyzing the relationships among the object-oriented anti-patterns and the detection tools. In the rest of the paper, we subsequently present the related work. Next, we present some basic definitions, and the details of the proposed approach is described. After that, the empirical validations of the proposed method are presented, followed by the results and discussion. 
Finally, the concluding remarks are given, along with scope for future work. RELATED WORKS Many empirical studies have demonstrated the negative impact of anti-patterns on change-proneness, fault-proneness, and energy efficiency (Romano et al., 2012;Khomh et al., 2012;Morales et al., 2016). In addition to that, Hecht et al. (2015a), Chatzigeorgiou & Manakos (2010), Hecht, Moha & Rouvoy (2016) observed an improvement in the user interface and memory performance of mobile apps when correcting Android anti-patterns. They found that anti-patterns were prevalent in the evolution of mobile applications. They also confirmed that anti-patterns tend to remain in systems through several releases unless a major change is performed on the system. Many efficient approaches have been proposed in the literature to detect mobile applications' anti-patterns. Some researchers concentrate on ensuring that the soft is free of contradictions which are called consistency. Alharbi et al. (2014) detected the anti-patterns related to inconsistency in mobile applications that were only related to camera permissions and similarities. Joorabchi, Ali & Mesbah (2015) detected the anti-patterns related to inconsistency in mobile applications using a tool called CHECKCAMP that was able to detect 32 anti-patterns related to inconsistencies between application versions. Hecht et al. (2015b) used the Paprika approach to detect some popular object-oriented anti-patterns in the code of mobile applications using threshold technique. Linares-Vásquez et al. (2014) detected 18 object oriented (OO) anti-patterns in 1,343 Java mobile applications by using DÉCOR. This study focused on the relationship between smell anti-patterns and application domain. Also, they showed that the presence of anti-patterns negatively impacts software quality metrics; in particular, metrics related to fault-proneness. Yus & Pappachan (2015) analyzed more than 400 semantic Web papers, and they found that more than 36 mobile applications are semantic mobile applications. They showed that the existence of semantic helps in better local storage and battery consumption. The detection of semantic anti-patterns will improve the quality of mobile applications. Palomba et al. (2017) proposed an automated tool called A DOCTOR. This tool can identify 15 Android code smells. They made an empirical study conducted on the source code of 18 Android applications and revealed that the proposed tool reached 98% precision and 98% recall. A DOCTOR detected almost all the code smell instances existing in Android applications. Hecht et al. (2015b) introduced the PAPRIKA tool to monitor the evolution of mobile application quality based on anti-patterns. They detected the common anti-patterns in the code of the analyzed applications. They detected seven anti-patterns; three of them were OO anti-patterns and four are mobile applications anti-patterns. Reverse engineering is the process of analyzing software systems to identify the components of the systems and the interrelationships between them and presenting the systems in other forms or at a higher level of abstraction (Chikofsky & Cross, 1990). In this paper, we used RE to transfer code level to design level for detecting mobile applications' anti-patterns. RE techniques are important for understanding the construction of the user interface and algorithms of applications. Additionally, we can know all the properties of the application, its activities, and permissions and can read the Mainfest.xml of the applications. 
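As a concrete illustration of this kind of inspection, the sketch below reads an APK's manifest-level properties (package name, permissions, activities) programmatically. It is only an illustrative aside under stated assumptions: it relies on the third-party androguard library, which is not part of the tool chain used in this paper, its API differs slightly across versions, and the APK file name is hypothetical.

```python
# Illustrative sketch (not part of the original study): programmatically reading
# an APK's manifest metadata, assuming the third-party "androguard" library is
# installed (pip install androguard). Module paths may differ between versions.
from androguard.core.bytecodes.apk import APK

def summarize_apk(path):
    """Return a small summary of the properties RE tools typically expose."""
    apk = APK(path)                       # parses AndroidManifest.xml and resources
    return {
        "package": apk.get_package(),     # application package name
        "permissions": apk.get_permissions(),
        "activities": apk.get_activities(),
        "min_sdk": apk.get_min_sdk_version(),
    }

if __name__ == "__main__":
    print(summarize_apk("example.apk"))   # hypothetical APK file
```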
RE techniques have been used with mobile applications for many purposes not just for detecting anti-patterns. Song et al. (2017) used RE for improving the security of Android applications. While Zhou et al. (2018) used the RE technique to detect logging classes and to remove logging calls and unnecessary instructions. Also, Arnatovich et al. (2018) used RE to perform program analysis on a textual form of the executable source and to represent it with an intermediate language (IL). This IL has been introduced to represent applications executable Dalvik (dex) bytecode in a human-readable form. ONTOLOGY AND SOFTWARE ENGINEERING According to the IEEE Standard Glossary of Software Engineering Terminology-Description (1990), software engineering is defined as "the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software." Also, from the knowledge engineering community perspective, computational ontology is defined as "explicit specifications of a conceptualization." According to Calero, Ruiz & Piattini (2006), Happel & Seedorf (2006), the importance of sharing knowledge to move the software to more advanced levels require an explicit definition to help machines interpret this knowledge. Happel & Seedorf (2006) decided that ontology is the most promising way to address software engineering problems. Elsayed et al. (2016) proofed the similarities in infrastructures between UML and ontology components. They proposed checking some UML quality features using ontology and ontology reasoning services to check consistency and redundancies over UML models. This would lead to a strong relationship between software design and Ontology development. In software engineering, ontologies have a wide range of applications, including model transformations, cloud security engineering, decision support, search, and semantic integration (Kappel et al., 2006;Aljawarneh, Alawneh & Jaradat, 2017;Maurice et al., 2017;Bartussek et al., 2018;De Giacomo et al., 2018). Semantic integration is the process of merging the semantic contents of multiple ontologies. The integration may be between applications that have the same domain or have different domains to take the properties of both applications. We make ontology integration for many reasons: to reuse the existing semantic content of applications, to reduce effort and cost, to improve the quality of the source content or the content itself, and to fulfill user requirements that the original ontology does not satisfy. PROPOSED METHOD In this section, we introduce the key components of the proposed method for analyzing the design of mobile applications to detect design anti-patterns, and for making semantic integration between mobile applications via ontology reengineering. The proposed method for anti-pattern detection consists of three main phases and is summarized in Fig. 1. Also, there is an optional phase called the integration phase. 1. The first phase presents the process of reformatting the mobile application to Java format. 2. The second phase presents the reverse-engineering process. In this phase, we used RE to reverse the Java code of mobile applications and generating UML class diagram models. Additionally, many design anti-patterns were detected. The presented reverse approach is accurate enough to analysis the information that we need about APK to reverse UML models of the applications. 3. The third phase completes the anti-patterns detection and correction processes. 
This phase converts UML mobile application model to OWL ontology, then analyzes the relationships among object-oriented anti-patterns and offers methods to resolve the anti-patterns related to semantic and inconsistency. After that, we can regenerate the Java code of mobile applications. The developer can ensure that anti-patterns in existing applications will not be repeated in application revisions and may improve both operational characteristics and user experience. 4. The integration phase is an optional fourth phase. In this phase, we integrate two applications by merging the OWL ontologies of both applications. From these two ontologies, we will yield one integrated application for doing both services with minimum anti-patterns. We will present in detail the rationale provided for why this integration is needed as an optional phase if we need. The integration of mobile applications The integration process is most for the inclusion of new skill sets for applications such as IOT or monitoring applications or potentially voice-activation integration into an existing application. But, here we were interested in presenting a new manner for homogenous integration to combine the advantages of two mobile applications in a new pattern. In this section, we provided a rationale for why this integration is needed and presenting the integration as an extra phase if we need where the other detection phases do not change. Patterns are advanced methods to develop any mobile applications. The integration or merging of mobile applications is a good step in mobile application development. The advantage of the integration of mobile applications is in responding to the puzzling selection of the appropriate application from a set of applications. This will achieve the same objective if each application has a different advantage and the developer wants to start to improve pattern combines all advantage without anti-patterns. To clear our idea, we choose two homogenous applications: Viber and WhatsApp. They are the most popular messaging and Voice Over IP applications. Both Viber and WhatsApp are similar in services, features, security, and cost. There is plenty to like about both applications: they produce the same services as end-to-end encryption, support groups and video calls, support on any operating system, allow transmission of documents or multimedia, and work over 3G, 4G, and Wi-Fi. Well, both are fantastic in their way, but which one is better for the developer as a pattern for refinement? We found that Viber had been offering both video and voice calling for a far longer time than WhatsApp and has a hidden chat feature. Also, Viber permits the user to play a list of games with other Viber contacts. However, WhatsApp is popular and easy to use. We can make the integration of both applications and take the best skills of both. We imagine that when producing a new application we can directly integrate it to the old one without replacing. In the case of heterogonous integration applications, the developer, for example, may want to develop a new health care hybrid application. From the website "free apps for me" (https://freeappsforme.com/), a developer can find at least seven applications for measuring blood pressure. All of them are free and available on a different platform. There are also at least 13 diabetes applications. 
When a developer merge two applications (one for measuring blood pressure as the "Smart Blood Pressure" application and the other for controlling diabetes as the "OneTouch Reveal" application), the integration phase will yield one integrated application for doing both services, with minimum anti-patterns. Then the developer can add the new relations between these disease controller without conflict. The integration allows the combination of the skills of both applications to get new mobile application pattern. These two examples of two types of integration answer the question of why we need to integrate mobile applications. We suggest using the integration pattern, then comparing between the two integration proposed methods to select the suitable one. The first integration method is for after decompiling the APK of the applications. We use RE methodology for generating one UML class diagram of both applications. Then we start the detection of the anti-patterns process for the integrated application (Fig. 2). The second integration method is through merging the OWL ontologies of both applications using the Prompt plugin in protégé as the ontology editor as introduced in Fig. 3. The implementation In this section, we propose the implementation of the proposed detection method and determine which packages are suitable for each phase. The first phase: APK files are zip files used for the installation of mobile apps. We used the unzip utility for extracting the files stored inside the APK. It contained the AndroidManifest.xml, classes.dex containing the Java classes we used in the reverse process, and resources.arsc containing the meta-information. We de-compiled the APK files using apktool or Android de-compiler. Android de-compiler is a script that combines different tools to successfully de-compile any (APK) to its Java source code and resources. Finally, we used a Java de-compiler tool such as JD-GUI to de-compile the Java classes. JD-GUI is a standalone graphical utility that displays the Java code of ". class" files. The input of the first phase was the APK file of the mobile application and the output was the Java classes of the APK application. JD-GUI is accurately enough to generate the Java code that we use to reverse the models of the applications. The second phase: We used a RE approach for generating the UML class diagram models of the mobile applications. Elsayed, El-Dahshan & Ghannam (2019) compared between UML tools, the authors found that Modelio 3.6 is a suitable tool for modeling and detecting UML design anti-patterns. The UML class diagram was generated by reversing the Java binaries of the mobile app. Detecting anti-patterns in the UML model is the first step in the detection process. The input of the second phase was classes.java of the app and the output was the UML class diagram model of the app with a list of the detected anti-patterns. The third phase: By converting the model to XML format, we could generate it as an OntoUML model in OLED, which is the editor of OntoUML for detecting semantic anti-patterns. OntoUML is a pattern-based and ontologically well-founded version of UML. Its meta-model has been designed in compliance with the ontological distinctions of a well-grounded theory named the unified foundational ontology. OLED editor also supports the transformation of the OLED file to the OWL ontology of the mobile app, allowing the detection of inconsistency and semantic anti-patterns using the "reasoner" ontology in Protégé. 
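To illustrate the kind of consistency check the reasoner performs in this third phase, the sketch below scripts a comparable check with the owlready2 library and its bundled HermiT reasoner. This is an assumed programmatic substitute for running the reasoner inside Protégé, not the authors' actual tool chain, and the OWL file name is hypothetical.

```python
# A minimal sketch of the consistency check performed in the third phase.
# The paper runs the reasoner inside Protégé; as an illustration only, a similar
# check can be scripted with "owlready2" (pip install owlready2), which ships
# the HermiT reasoner and requires a Java runtime.
from owlready2 import get_ontology, sync_reasoner, default_world

def check_consistency(owl_path):
    onto = get_ontology(f"file://{owl_path}").load()   # hypothetical OWL file exported from OLED
    with onto:
        sync_reasoner()                                 # runs HermiT and reclassifies the ontology
    # Classes equivalent to owl:Nothing are unsatisfiable, i.e. inconsistent.
    return list(default_world.inconsistent_classes())

if __name__ == "__main__":
    for cls in check_consistency("avast_mobile_security.owl"):
        print("Inconsistent class:", cls)
```

Classes that the reasoner reclassifies as equivalent to owl:Nothing are unsatisfiable, which is the programmatic counterpart of the inconsistent classes that Protégé highlights.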
Protégé is the broad ontology editor commonly used by many users. The integration phase (the fourth optional phase): we propose two methods for integrating mobile applications. The first method is merging the UML models at the second phase when we reverse the models from Java code and then completing the detection phases over the integrated application. The second method is merging the OWL ontologies of the both applications using a Prompt (Protégé plugin) to generate one OWL ontology pattern. Figure 4 shows the both applications "Viber and WhatsApp" components before merging. Figure 5 shows the integrated application; Fig. 5 has three tabs (classes, slots, and instances) which are the components of the ontology. Every tab shows the components of its type after integration. Finally, we used "Reasoner in Protégé" to check the consistency after integration. EMPIRICAL VALIDATIONS We assessed our approach by reporting the results we obtained for the detection of 15 anti-patterns on a random sample of 29 popular Android applications downloaded randomly from the APK Mirror. Table 1 presents the downloaded applications from the APK Mirror. We selected some popular applications such as YouTube, WhatsApp, Play Store, and Twitter. The size of the applications included the resources of the application, as well as images and data files (Table 1). The research study included the identification and repetition of anti-patterns across different domains and different sizes. Case study on "Avast Android Mobile Security" To explain the proposed method, we presented a snapshot of it in a different case study "Avast Android Mobile Security." The case study is one of the 29 mobile applications that is proposed in this article for the evaluation of the proposed method. The case study is downloaded from the APKMirror. The "Avast Android Mobile Security" secures the devices against phishing attacks from emails, phone calls, infected websites, or SMS messages. Also, it has many other features as Antivirus Engine, App Lock, Call Blocker, Anti-Theft, Photo Vault, virtual private network, and Power Save. The reason for choosing "the Avast Android Mobile Security" application as a case study is that it has the maximum number of the detected anti-patterns using the proposed method. Using the reverse methodology, we generated the UML class diagram model of the Java classes in Modelio. The model includes the classes, subclasses, class attributes, operations, and the associations between them (Fig. 6). After generating the UML class diagram of the application in Modelio, we detected 229 repeated anti-patterns in the "Avast Android Mobile Security." The anti-patterns are shown in Fig. 7. The number and the location of the anti-patterns were determined. There were 10 detected anti-patterns (without repeat): "NameSpaces have the same name," "NameSpace is Leaf and is derived," "NameSpace is Leaf and is abstract," "Generalization between two incompatible elements," "A public association between two Classifiers one of them is public and the other is privet," "Classifier has several operations with the same signature," "Classifier has attributes with the same name," "The status of an Attribute is abstract and class," "A destructor has two parameters," and finally "MultiplicityMin must be inferior to MultiplicityMax." Figure 8 shows a sample of them. To convert the UML model to XML format, we converted it into an enterprise architecture file then converted it to an OLED file. 
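Returning briefly to the optional integration phase described above, the following sketch gives a programmatic flavour of combining two exported OWL ontologies into a single pattern before re-running a reasoner. It is only an illustration under stated assumptions: the study performs this merge interactively with the Prompt plugin in Protégé, whereas the snippet uses owlready2 and owl:imports as an assumed stand-in, and the file names and merged IRI are hypothetical.

```python
# Illustrative sketch of the optional integration phase (an assumed substitute
# for the Prompt plugin, not the plugin itself): both source ontologies are
# loaded into one RDF world and referenced from a single entry ontology, so a
# reasoner can then check the combined content as a whole.
from owlready2 import World, sync_reasoner

def merge_ontologies(path_a, path_b, out_path):
    world = World()
    onto_a = world.get_ontology(f"file://{path_a}").load()   # e.g. Viber ontology (hypothetical file)
    onto_b = world.get_ontology(f"file://{path_b}").load()   # e.g. WhatsApp ontology (hypothetical file)
    merged = world.get_ontology("http://example.org/merged.owl")   # hypothetical IRI
    merged.imported_ontologies.append(onto_a)                 # combine via owl:imports
    merged.imported_ontologies.append(onto_b)
    with merged:
        sync_reasoner(world)                                   # consistency check after integration
    merged.save(file=out_path, format="rdfxml")                # writes the entry ontology with its imports
    return merged
```

In this simplified form, "merging" only means that both source ontologies become visible to the reasoner through one entry ontology; a dedicated merging tool such as Prompt additionally aligns and deduplicates equivalent entities.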
In the "Avast Android Mobile Security" OLED file, we validated the model for detecting the anti-patterns. The detected anti-patterns in the different apps were: association cycle anti-patterns, Binary relations with overlapping ends anti-patterns, imprecise abstraction anti-patterns, and relation composition anti-patterns. After anti-patterns detection using OntoUML editor, OLED supports the transformation of OLED file to the OWL ontology. We checked the inconsistency anti-patterns using the reasoner of the ontology editor (Protégé). The reasoner detected the anti-patterns related to inconsistency as (similar name, multiplicity constraints, and cyclic inheritance). Using the reasoner of ontology over the case study, we detected the anti-patterns in the classes that have the anti-patterns NameSpaces have the same name, classifier has several operations with the same signature, classifier has attributes with the same name, and MultiplicityMin must be inferior to MultiplicityMax, which we detected after generating the class diagram in Modelio, and detected the anti-pattern (association cyclic) which was detected via OLED. The treatment or correction of the detected anti-patterns is classified into the following: Modelio presents the solution as a list of recommendation which developer can do it manually. In this case study, Table 2 presents the anti-patterns and the method of correction. OLED presents automatic solutions to correct the anti-patterns which we list in Table 3. Reasoner in Protégé presents all inconsistency anti-patterns where as Reasoner gives just the location of the inconsistent classes as in Fig. 9. RESULTS AND DISCUSSION We applied our proposed method on a sample of 29 Android applications, which we downloaded from the APK Mirror. The results present the detected anti-patterns in the 29 mobile applications and the relation between the different types of anti-patterns. The proposed method detected 15 anti-patterns. The total number of anti-patterns that appeared in the 29 applications was 1,262 anti-patterns. We classified the anti-patterns according to their existence in the UML class diagram components. The occurrences of the anti-patterns are given in Table 4. Every group has the anti-patterns that were detected in it. For example, the group "Anti-patterns in Operations" presents all anti-patterns that were detected in the operations using the three tools. Table 5 shows the detected anti-patterns in each application using the proposed method and the total number of anti-patterns in the 29 mobile applications. We found that the "anti-patterns in the class" group is the most commonly detected anti-pattern in Android applications. The "anti-patterns in operation" is the least commonly appeared anti-pattern (Fig. 10). We measured the relations between anti-patterns groups using correlation coefficient. Correlation coefficient is a statistical measure of the degree to which changing the value of one variable predict changing to the value of the other. A positive correlation indicates that the extent to which those variables increase or decrease in parallel. While a negative correlation indicates the extent to which one variable increases as the other Table 3 OntoUML anti-patterns and the correction way. 
The anti-pattern The method of correction Association cycle Chang the cycle to be closed or open cycle Binary relation with overlapping ends Declare the relation as anti-reflexive, asymmetric, and anti-transitive Imprecise abstraction Add domain-specific constraints to refer to which subtypes of the association end to be an instance of the other end may be related Relation composition Add OCL constraints which guarantee that if there is a relation between two types and one of them has subtypes, there must be constraints says that the subtypes are also in a relation with the other type Relation specialization Add constraints on the relation between the type and the super-type, declaring that the type is to be either a specialization, a subset, a redefinition or disjoint with relation SR MultiplicityMin must be inferior to MultiplicityMax Change the value of the minimum multiplicity to be less than the maximum multiplicity The status of an Attribute is abstract and class at the same time Set only one of the statuses to true A destructor has parameters Remove these parameters or remove the destructor stereotype from the method decreases. Table 6 presents the correlations between anti-patterns groups. The tool can detect certain group, it also can detect in parallel the other as attributes anti-patterns with operations anti-patterns. Also, appearance of attributes anti-patterns in certain applications indicates the appearance of operations anti-patterns strongly. Then the correlation between the five groups of anti-patterns is used to know if the existence of any type of them implies the existence of other type. There was a strong negative correlation (-0.1) between namespaces anti-patterns and association anti-patterns. Also, a strong positive correlation (0.8) between attributes anti-patterns and operations anti-patterns. Table 5 The anti-patterns in each app. Also, we analyzed the correlation between the detection tools of the proposed method ( Table 7). The greatest correlations were between Modelio and Protégé. For assessing the direct relation between Protégé and Modelio, we calculated the statistical means of anti-patterns which were detected by each tool (Modelio, Protégé, and OLED) on 29 mobile applications as in Fig. 11. Figure 11 shows the similarity between both the means of Protégé and Modelio as the result of the correlation. Now, we want to statistically answer the question "Do we need to use the three tools" and "is there a relation between them?" Mobile app In order for statistical analysis to explain the relation among the three tools and the antipatterns' groups, we used the analysis of variance ANOVA test. This is to determine whether there are any statistically significant differences between the means of antipatterns detection by each one of the tools, and also to determine if there is any relation between anti-patterns groups and the features of mobile applications. We use ANOVA to calculate a test (F-ratio) with which we can obtain the probability P-value (usually taken as P < 0.05) suggests that at least one group mean is significantly different from the others. The null hypothesis is (all population means are equal). The alternative hypothesis is (at least one population mean is different from the rest). Where the degree of freedom (df) between groups is 28 and df within the group is 116. We found that the significant differences are 0.578, 0.464, and 0.926 for Protégé, Modelio, and OLED, respectively. 
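To make the statistical procedure concrete, the sketch below computes a Pearson correlation between two anti-pattern groups and a one-way ANOVA across the three tools with SciPy. The per-application counts in the snippet are placeholders, not the study's data.

```python
# Illustrative sketch of the statistical analysis described above: Pearson
# correlations between anti-pattern groups and a one-way ANOVA over the
# per-application counts reported by the three tools. All arrays below are
# placeholder values, not the study's data.
import numpy as np
from scipy import stats

# Hypothetical per-application anti-pattern counts (one entry per app).
attributes_grp = np.array([12, 3, 7, 9, 4])
operations_grp = np.array([10, 2, 8, 7, 3])
r, p = stats.pearsonr(attributes_grp, operations_grp)
print(f"Pearson correlation between groups: r={r:.2f}, p={p:.3f}")

# One-way ANOVA: do the three tools detect, on average, different counts?
modelio = np.array([30, 12, 25, 18, 22])
protege = np.array([28, 10, 27, 17, 20])
oled    = np.array([ 5,  2,  4,  3,  6])
f_ratio, p_value = stats.f_oneway(modelio, protege, oled)
print(f"ANOVA: F={f_ratio:.2f}, p={p_value:.3f}")  # p < 0.05 suggests differing means
```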
These significance values imply that the null hypothesis is false, i.e., all the detection tools are necessary and required for the detection of the anti-patterns. The ANOVA also showed statistically that the features or specifications of the applications are not a concern; that is, the low F-value means that the groups are close together relative to the variability within each group. We report the result of the integration phase separately because it is an optional phase. In the case of homogeneous applications, we found that the number of detected anti-patterns in the output application was not the same: the ontology integration tool Prompt detected more anti-patterns than the Modelio tool did. Figure 11 (the means of the detection tools) also shows a reverse correlation between Protégé and the OntoUML editor. This indicates that semantic integration increases the accuracy of detecting anti-patterns in mobile applications. Table 8 shows the number of anti-patterns in each application in the integration case study (Viber and WhatsApp) and the number of them in the mobile application pattern after merging. The enhancement using ontology is approximately 11.3%, in addition to a consistency check. The formula used to calculate the percentage increase between two values is

percentage increase = (second value - first value) / first value × 100.   (1)

Substituting in Eq. (1), the first value is the total number of anti-patterns detected using Modelio = 115, and the second value is the total number of anti-patterns detected using Prompt = 128. The percentage increase is then (128 - 115) / 115 × 100 ≅ 11.3%, which implies that using ontology integration with Prompt (a Protégé plugin) instead of UML integration with Modelio increases the detection percentage. Additionally, using ontology to separately refine Viber or WhatsApp as a pattern enhanced them by approximately 4.04% and 89%, respectively, in addition to a consistency check by the "Reasoner." CONCLUSIONS In this paper, we focused on improving the quality of mobile applications. We introduced a general method to automatically detect anti-patterns, not by using specific queries, but by using Modelio, OLED, and Protégé in a specific order to obtain positive results. Also, with respect to the related work, our proposed method is more general than other methods, as it supports both semantic and structural anti-pattern detection at the design level. For the evaluation of the proposed method, we applied it to a sample of 29 mobile applications. It detected 15 semantic and structural design anti-patterns. According to the proposed classification of anti-patterns, "the anti-patterns in the class group" was the most frequent group, and "the anti-patterns in the attribute group" was the least frequent. From the perspective of anti-pattern detection, the analysis of results also showed that there is a correlation between the Modelio and Protégé platforms, whereas there is no correlation between OLED and Protégé and no correlation between Modelio and OLED. We found that using ontology in the integration phase increases the detection percentage by approximately 11.3% and guarantees consistency, which is assessed by the reasoner of the ontology. Accordingly, semantic ontology integration has a positive effect on the quality of the new application. This helped with developing a correct, consistent, and coherent integrated pattern that has few anti-patterns. 
Finally, we recommend that the developer, before using any mobile application as a pattern, check the design of the selected application against anti-patterns. When a developer is concerned with avoiding a certain type of anti-pattern, the correlations between anti-pattern groups, and between tools, will help. The proposed method also considers the issues and problems of developers who are revising Android applications and integrating new packages of code skill sets. A code review such as the methodology proposed could be very valuable in terms of not carrying forward existing anti-patterns and not incorporating new code flawed with poor design. Reverse engineering a mobile application deeply into an OWL ontology is very useful. In the future, we are going to address the problem of big ontologies, which cannot be opened in ontology editors such as Protégé, in order to complete the detection process. Although detection of anti-patterns at the design level is very useful and reduces some anti-patterns at the code level, we will refine the metric method for detecting code-level anti-patterns on big ontologies. Also, we will create a semantic web application for anti-patterns that collects the detection tools for both levels together with an anti-pattern catalog. Finally, the correction phases in Modelio and the Reasoner are still open issues. ADDITIONAL INFORMATION AND DECLARATIONS Funding The authors received no funding for this work.
7,597.8
2019-04-04T00:00:00.000
[ "Computer Science" ]
Stochastic models associated to a Nonlocal Porous Medium Equation The nonlocal porous medium equation considered in this paper is a degenerate nonlinear evolution equation involving a space pseudo-differential operator of fractional order. This space-fractional equation admits an explicit, nonnegative, compactly supported weak solution representing a probability density function. In this paper we analyze the link between isotropic transport processes, or random flights, and the nonlocal porous medium equation. In particular, we focus our attention on the interpretation of the weak solution of the nonlinear diffusion equation by means of random flights. Introduction We deal with a Nonlocal Porous Medium Equation (NPME) studied in [3,4], given by the following degenerate nonlinear and nonlocal evolution equation subject to the initial condition where u := u(x, t), with x := (x 1 , . . . , x d ) ∈ R d , d ≥ 1, is a scalar function defined on R d × R + and ∂ t := ∂/∂t. The pseudo-differential operator ∇ α−1 is the fractional gradient denoting the nonlocal operator defined as ∇ α−1 u := F −1 (iξ||ξ|| α−2 F u), where the Fourier transform F and the inverse transform F −1 of a function v ∈ L 1 (R d ) are defined by If we restrict our attention to nonnegative solution u(x, t), the equation (3) becomes which is usually adopted to model the flow of a gas through a porous medium. The reader interested in the theory of porous medium equation can consult, for instance, [33]. Other types of nonlocal porous medium equations have been proposed in literature. For instance, [5,6] introduced the porous medium equation with fractional diffusion effects ∂ t u = div(u∇p), with nonlocal pressure p := (−∆) −s u, 0 < s < 1, and u ≥ 0. For α = 2 − 2s ∈ (0, 2) we obtain the equation (1) with m = 2; i.e. ∂ t u = div(u∇ α−1 u). In [32] the nonlinear diffusion equation (5) is generalized as follows with m > 1 and initial condition u(x, 0) = u 0 (x) which is nonnegative bounded with compact support or fast decaying at infinity. The main contribution in [32] concerns the study of the property of finite/infinite speed of propagation of the solutions to (6) with varying m. The following equation is studied in [34], where it is also proved that the self-similar solutions of (7) enjoy the L 1 -contraction property and then they are unique. Nevertheless, these solutions are not compactly supported. Explicit self-similar solutions to (6) and (7) have been obtained by [20] for particular values of m. The main goal of this paper is to investigate the relationship between (1) and some random models. In particular, we focus our attention to the probabilistic interpretations of the weak solution to NPME. The idea to study stochastic processes associated to the classical porous medium equation (4) was developed by different authors; see, for instance, [21-23, 12, 13, 24, 29]. In the listed papers the authors introduced different types of Markov chains on lattice and interacting particle systems having a dynamic which macroscopically converges to the solution of (4). By [17], the Barenblatt solution of (4) can be viewed as the mean of the first passage time of a symmetric stable process to exterior of a ball. In [1], the authors provided a probabilistic interpretation of (4) in terms of stochastic differential equations. Recently, [11] highlighted the connection between (4) and the Euler-Poisson-Darboux equations by taking into account time-rescaled random flights. 
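As a side note, the Fourier-multiplier definition of the fractional gradient recalled above lends itself to a direct numerical evaluation on a periodic grid. The following one-dimensional sketch is not taken from the paper; it simply applies the multiplier iξ|ξ|^(α-2) with the FFT, rewritten as i sign(ξ)|ξ|^(α-1) so that the ξ = 0 mode is handled without a division by zero.

```python
# A minimal numerical sketch (not from the paper) of the fractional gradient
# defined above as a Fourier multiplier, evaluated in one dimension on a
# periodic grid.
import numpy as np

def fractional_gradient_1d(u, dx, alpha):
    """Apply the 1-D fractional gradient of order alpha-1 to samples u."""
    n = u.size
    xi = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)          # Fourier variable
    # i*xi*|xi|**(alpha-2) = i*sign(xi)*|xi|**(alpha-1); this form is
    # well defined at xi = 0 for alpha > 1.
    symbol = 1j * np.sign(xi) * np.abs(xi) ** (alpha - 1.0)
    return np.real(np.fft.ifft(symbol * np.fft.fft(u)))

if __name__ == "__main__":
    x = np.linspace(-np.pi, np.pi, 256, endpoint=False)
    u = np.exp(-x**2)                                     # a smooth test profile
    g = fractional_gradient_1d(u, dx=x[1] - x[0], alpha=1.5)
    # For alpha = 2 the multiplier reduces to i*xi, i.e. the ordinary derivative.
    print(g[:3])
```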
Up to our knowledge, this paper is the first attempt concerning the probabilistic interpretation of the fractional porous medium equation (1). Similarly to [11], we can exploit stochastic models defined by continuous-time random walks in R d , d ≥ 1, arising in the description of the displacements of a particle choosing uniformly its directions; i.e. the so-called isotropic transport processes or random flights. In a suitable time-rescaled frame, the probability law of the above processes is given by the solution (8) below. Therefore, this paper represents a generalizations of some results contained in [11]. We point out that the proposed random processes recover some features of the Barenblatt weak solution (8) to nonlinear evolution equations like finite speed of propagation and the anomalous diffusivity. For this reason the random flights seem to represent a natural way to describe the real phenomena studied by means of (1). In Section 2, we recall the definition of weak solution to (1) as well as its basic properties. In Section 3 the isotropic transport processes are introduced. Furthermore, Section 3 contains our main results; i.e. Propositions 3.1, 3.2. From these propositions we are able to give a reasonable interpretation of the solutions to (1). In the last section we sum up the main contribution of the paper. A review on the weak solutions to NPME Let us recall the definition of weak solution to the nonlocal operator equation (1) and its main properties (see [4]). ) has a compact support in the space variable x and vanishes near t = T . It is crucial to observe that the constant C appearing in (8) guarantees the mass conservation and then u(x, t) (as well as u 0 (x)) is a probability density function with compact support B ct β := {x ∈ R d : ||x|| ≤ ct β }. By setting R 2 = 1 k 2/α , the solution (8) coincides with (2.4) in [4]. We point out that NPME has the property of finite speed of propagation. We are able to explain this property as follows. The solution to NPME is a continuous function u(x, t) such that for any t > 0 the profile u(·, t) is nonnegative, bounded and compactly supported. Hence, the support expands eventually to penetrate the whole space, but it is bounded at any fixed time. Therefore, for fixed t > 0, the support of (8) is given by the closed ball B ct β , while the free boundary (that is the set separating the region where the solution is positive) is given by the sphere S d−1 The finite speed of propagation of NPME is in contrast with the infinite speed of propagation of the classical heat equation; that is, a nonnegative solution of the heat equation is positive everywhere in R d . The next proposition contains the explicit Fourier transform of (8). A similar result has been already proved, for instance, in [4], Lemma 4.1. Proposition 2.1. The Fourier transform of the probability density function u(x, t) given by (8) is equal tô Proof. We prove the theorem for d ≥ 2. The case d = 1 follows by simple calculations. Let σ be the measure on S d−1 3 Isotropic transport processes related to NPME In this section, we analyze the link between the weak solution of the nonlocal equation (1) and the transport processes. We follow the approach developed in [11]. Let us start with introducing isotropic transport processes and recalling their main features. An isotropic transport process, also called random flight, is a continuous-time random walk in R d described by a particle starting at the origin with a randomly chosen direction and with finite speed c > 0. 
The direction of the particle changes whenever a collision with some scattered obstacles in the environment happens and then a new direction of motion is taken. For d ≥ 2, all the directions are independent and identically distributed. The directions are chosen uniformly on the sphere S d−1 1 = {x ∈ R d : ||x|| = 1}. For d = 1, we have two possible directions alternatively taken by the moving particle. The random flights have been studied, for instance, in [30,31,9,26,27,10,7,18,28]. Recently, in [14,15] the relationship between the isotropic transport processes and some fractional Klein-Gordon equations has been analyzed. Furthermore, stochastic models like random flights are associated to the Euler-Poisson-Darboux partial differential equations as argued in [16]. Rigorously speaking, we introduce the isotropic transport processes as follows. Let (T k , k ∈ N 0 ) be a sequence of random arrival epochs with T 0 := 0. Furthermore, let (V k , k ∈ N 0 ) be a sequence of random variables defined for d = 1, by ) denotes the Borel class on S d−1 1 . We assume that during the interval [0, t] the particle takes a new direction, V 0 , V 1 , . . . , V n , n + 1 times at random moments T 0 , T 1 , . . . , T n , respectively. Therefore, we can define an isotropic random flight on (Ω, (F n t , t ≥ 0)) as follows where X n (t) stands for the position, at time t ≥ 0, reached by the moving particle according to the mechanism described above and (V n (t), t ≥ 0) is the jump process Therefore X n (t) represents a random motion with finite velocity c and X n (t) ∈ B ct a.s. for a fixed t > 0. The components of X n (t) can be written explicitly as in formula (1.6) of [10]. Important assumptions in our paper are: the random vector of the renewal times (τ 1 , . . . , τ n ), where τ k+1 := T k+1 − T k , has the joint density equal to and for d ≥ 2, or The distributions (14) and (15) (13) and (14) have been used in [8] to generalize the family of random walks defined above. In the one-dimensional case the process (12) is the well-known telegraph process and admits the density given by (see [9]) We observe that for n odd, we have that Under the assumptions (14) and (15), [10] provides (Theorem 2 in [10]) the explicit density functions of the random flights X n (t); that is, Remark 3.1. It is easy to check that the sequence of random flights X n , n ∈ N, admits the following scaling property P aX n (t/a) ∈ dx = P X n (t) ∈ dx , a > 0. Hereafter, we discuss the main results of the paper; i.e. Propositions 3.1, 3.2 below. Therefore, we provide a reasonable probabilistic interpretation of the weak solution (8) in terms of a time-rescaled random flights. From the features of X n it emerges that the random flights share with (8) the crucial property of finite speed of propagation in the space. For this reason the transport process (12) seems to represent a fine choice to model phenomena described by nonlinear diffusion equation with nonlocal pressure (1). Our first result is the following theorem and it represents a generalization of Theorem 1 in [11]. Proof. Let n ∈ N. We observe that path map is continuous and then Y n is a continuous process. Therefore, Y n is progressively measurable if it is adapted to (G n t , t ≥ 0) (see, e.g., Proposition 1.13, [25]). Let t β > 0, (s, ω) → V n (s, ω), ω ∈ Ω, s ≤ t β is a B([0, t β ]) ⊗ G n t -measurable function. Hence, by Fubini's theorem one has that the map ω → c t β 0 V n (s, ω)ds is G n tmeasurable and then the process Y n is adapted to the filtration (G n t , t ≥ 0). 
By rescaling the time coordinate as follows the solution (8) to NPME becomes where c := c(α, d) := 1/k 1 α . Let us deal with a telegraph process defined by (12) with time scale t ′ and speed c. By exploiting the duplication formula for the Gamma function we can write the solution (18) for d = 1 as follows For the solution (19) coincides with the first part of (16), while for the solution (19) coincides with the second part of (16). For n > 2, in both cases m ∈ (1, ∞). Therefore, we can conclude that Now, let us consider a random flight defined in R d , d ≥ 2, by (12) with time scale t ′ and speed c defined above. Under the assumption (14), for the function (18) coincides with the first part of (17). Since m ∈ (1, ∞), we infer that For d = 2 the inequality (20) holds for n ≥ 3; for d = 3, it holds for n ≥ 2; for d > 3, (20) holds for all n ≥ 1. Therefore, under the condition (20) Analogously, under the assumption (15), for the function (18) coincides with the second part of (17). Since m ∈ (1, ∞), we infer that d > 2 n + 2. For d = 3 the inequality (20) holds for n ≥ 3; for d = 4, it holds for n ≥ 2; for d > 4, (20) holds for all n ≥ 1. Therefore, under the condition (21) To enhance the features of the random models Y n , n ≥ 1, it is useful to introduce the Euclidean distance process R n := (R n (t), t ≥ 0); that is R n (t) := ||Y n (t)||. For a fixed t ≥ 0, R n (t) ∈ [0, ct β ] a.s. The next result will be useful for arguing on the anomalous diffusivity of Y n . 1) the probability density function of R n becomes: 2) let p ≥ 1 and d ≥ 2; then √ π (ct β ) p , p even; 3) the rescaled process ( X n (t β ) ct β , t ≥ 0) has the distribution law independent from the time t and with compact support B 1 ; i.e. For d = 1 the result (24) follows by similar calculations. 3) For fixed t > 0, the result (3.2) is derived from (18), by applying the Jacobian theorem to the bijection g : R d → R d with g(x) = 1 ct β x. By the same calculations leading to (9), we can prove that the Fourier transformŵ(ξ, t) holds true. 4) It is an immediate consequence of the point 1). The Barenblatt-Kompanets-Zel'dovich-Pattle solution to the classical PME does not spread in the space linearly over the time and then we can argue that the phenomena described by the equation (4) represent anomalous diffusion (see, for instance, [33]). Similar considerations hold for (8). By means of Theorem 3.2, we infer that the stochastic models Y n , n ≥ 1, behave similarly to an anomalous diffusion. From (23) and (24), we observe that Var R n (t) = O t 2β , t > 0. Conclusions We are able to provide a probabilistic interpretation of the weak solution (8) to NPME. In particular, we deal with random flight models (12) with a suitable rescaling of the time coordinate. These random processes enjoy the main features of (8), at least for particular values of m: • finite speed of propagation property with compact support given by a closed ball; • spread over the space like t 2β ; i.e. anomalous diffusivity depending on the values of the fractional parameter α. In conclusion, the isotropic transport processes seem to describe well the real phenomena studied by means of the degenerate nonlinear diffusion equation with fractional pressure (1).
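The finite-speed-of-propagation and scaling properties emphasized above are straightforward to probe by direct simulation. The sketch below is our own Monte Carlo illustration: it generates isotropic random flights in R^3, and, purely as an assumption of the sketch, places the n direction changes uniformly on [0, t] rather than using the Dirichlet-type renewal laws (14)-(15) of the paper. It checks that the sample paths stay inside the closed ball of radius ct, and that the radial quantiles of a X_n(t/a) and X_n(t) agree, as in the scaling property of Remark 3.1.

```python
# Monte Carlo sketch of an isotropic random flight (transport process) in R^3.
# Assumption of this sketch only: switching epochs are i.i.d. uniform on [0, t];
# the paper instead uses the Dirichlet-type joint laws (14)-(15).
import numpy as np

rng = np.random.default_rng(0)

def random_flight(n, t, c=1.0, d=3):
    """Position X_n(t): motion at speed c with n uniformly placed direction changes."""
    epochs = np.sort(rng.uniform(0.0, t, size=n))
    sojourn = np.diff(np.concatenate(([0.0], epochs, [t])))   # n+1 sojourn times summing to t
    dirs = rng.normal(size=(n + 1, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)        # directions uniform on S^{d-1}
    return c * (sojourn[:, None] * dirs).sum(axis=0)

# Finite speed of propagation: X_n(t) lies in the closed ball of radius c*t.
t, n, trials = 2.0, 4, 20000
radii = np.array([np.linalg.norm(random_flight(n, t)) for _ in range(trials)])
print(radii.max() <= t + 1e-12)

# Scaling property (Remark 3.1): a*X_n(t/a) and X_n(t) have the same law.
a = 2.0
r_scaled = np.array([np.linalg.norm(a * random_flight(n, t / a)) for _ in range(trials)])
print(np.quantile(r_scaled, [0.25, 0.5, 0.75]).round(3))
print(np.quantile(radii,    [0.25, 0.5, 0.75]).round(3))       # should agree up to Monte Carlo error
```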
3,599.4
2019-02-04T00:00:00.000
[ "Mathematics" ]
Reading and Visual Search: A Developmental Study in Normal Children Studies dealing with developmental aspects of binocular eye movement behaviour during reading are scarce. In this study we have explored binocular strategies during reading and during visual search tasks in a large population of normal young readers. Binocular eye movements were recorded using an infrared video-oculography system in sixty-nine children (aged 6 to 15) and in a group of 10 adults (aged 24 to 39). The main findings are (i) in both tasks the number of progressive saccades (to the right) and regressive saccades (to the left) decreases with age; (ii) the amplitude of progressive saccades increases with age in the reading task only; (iii) in both tasks, the duration of fixations as well as the total duration of the task decreases with age; (iv) in both tasks, the amplitude of disconjugacy recorded during and after the saccades decreases with age; (v) children are significantly more accurate in reading than in visual search after 10 years of age. Data reported here confirms and expands previous studies on children's reading. The new finding is that younger children show poorer coordination than adults, both while reading and while performing a visual search task. Both reading skills and binocular saccades coordination improve with age and children reach a similar level to adults after the age of 10. This finding is most likely related to the fact that learning mechanisms responsible for saccade yoking develop during childhood until adolescence. Introduction A good eye movement control, in particular saccades and fixations, is essential for reading. Several studies have examined ocular motor behaviour in children and adults subjects. Buswell [1], Rayner [2] and McConkie et al. [3] showed that younger children have longer and more frequent fixations, smaller saccades, and frequent regressive saccades (leftward saccades). In line with these findings, a review article, Levy-Schoen and O9Regan [4] reported developmental aspects of reading and they described that reading speed increased with increasing of children's age and reading skill capabilities. Moreover, the duration of fixations decreased also with age reaching adult level at about 11 years old. Reading capabilities also increase as children grow, leading to an improvement of these ocular motor performance [5]. Another important difference bewteen children and adults is the so-called 'perceptual span' area, i.e. the section of text from which a subject can extract useful information from reading. Rayner [2] reported that this area is smaller in beginner readers than in proficient adult readers. This difference is consistent within a child population: 7 year-olds process a smaller area of text during fixation than older children (11 year-olds). This result could partially explain why younger children are slower readers. Interestingly, a reduced perceptual span has also been suggested to be the cause of reading difficulties in dyslexics [6]. Until 2006 the majority of research dealing with eye movements in reading was limited to measuring movements from only one eye. However, reading is an activity requiring saccades and vergence eye movements: horizontal saccades bring the eyes to successive words but the vergence angle between the two eyes needs to be adjusted to the distance of the word for appropriate fusion of the two retinal images to take place. 
Bucci & Kapoula [7] found that the binocular coordination of saccades while reading single words in 7 year-old children is significantly worse than in adults; such poor coordination is observed both when reading single words and when fixating a LED light. The authors suggested that fine binocular motor control, as is needed for reading, develops via learning mechanisms based on the interaction between the saccadic and the vergence systems. Poor binocular control in young readers could explain the long fixation durations observed and would interfere with the process of learning to read. Developmental aspects of binocular saccade coordination have been previously reported by Fioravanti, Inchingolo, Pensiero & Spanio [8]. These developmental aspects in children were noted by recording horizontal saccades to LED-targets. The authors compared saccade characteristics of 3 groups of subjects: young children aged 5-9, older children aged [11][12][13] and adults. The authors showed similar ocular motor behaviour between the older children and the adult group, i.e., smaller saccade disconjugacy. In contrast, the younger group of children reported larger saccade disconjugacy. The authors attributed this effect to a "poor compensation of mechanical asymmetries of the orbital planes existing in young children". Bassou, Granié, Pugh, & Morucci [9] were the first to record binocular eye movements in 10 year olds while reading a text. The authors showed that saccades of the two eyes can be highly disconjugate in children of this age, suggesting that Hering's law [10] where both eyes are well yoked because they receive equal innervation, is not always obeyed during reading. Recall that Hering postulated that the brain sends a unique command to each eye, so that they move as a uniform organ. Bassou and collaborators pointed out that poor binocular control in children could interfere with learning to read. Note however, that due to the low resolution of the recording system used (i.e. an EOG device), these results were only on percentage of asymmetry of the amplitude of saccades between the two eyes. Cornelissen, Munro, Fowler & Stein [11] compared eye movements during reading word lists in a group of twenty children aged 9-10 and ten young adults. Eye movements were recorded by an infra-red system (IRIS, SKALAR). They found that children showed significant poorer binocular coordination when fixating words than the adults. Blythe et al. [12] compared binocular coordination in twelve children (aged 7 to 11) and in twelve young adults (aged 18-21). They measured the binocular coordination at the beginning and at the end of the fixation, and found that children showed significantly larger disconjugate fixations than adults. Furthermore, fixations in children were more divergent than in adults who showed more frequent convergent fixations. Taken together, these differences between children and adults are in line with the hypothesis that children's ocular motor control is immature. A recent study from our group (Bucci, Nassibi, Gerard, Bui-Quoc, & Seassau [13]) explored the quality of binocular coordination during reading and during visual search in groups of dyslexic and non dyslexic children of various ages. For non dyslexic children, we reported that the disconjugacy measured during and after the saccade was significantly smaller in [10][11][12] year-olds than in 8-9 year-olds. Furthermore, young children made smaller saccade amplitudes, and tended to fixate more often and for longer than in older children. 
Such ocular motor behaviour has been observed both while reading and in a visual search task, suggesting an immaturity of the saccadic system and of its interaction with the vergence system. Based on these studies, the quality of binocular coordination during and after saccades seems to be under-developed in children. It should be noted however that the mentioned studies examined a small number of children or examined children with a large range of ages. The purpose of the present study is to further examine binocular measures of saccades during reading and during visual search tasks in a large population of normal readers aged from 6 to 15 years, and compare these results with those from a group of adults. Our driving hypothesis, in line with Cornelissen et al. [11] and Blythe et al. [12] and with our previous works on reading [13], [7], is that binocular performance during reading will improve with age. The collected data could also prove useful as a reference for any further studies examining ocular motor development in children with reading difficulties. Subjects Sixty-nine children (aged 6 to 15) and 10 adults (aged 24 to 39) participated in the study. For an easier presentation of their clinical and visual characteristics, participants were divided into five groups of children depending on their age and scholastic level: 15 children aged 6-7 years (mean age: 7.0 ± 0.1, first grade of French primary school); 15 children aged 8-9 years (mean age: 8.5 ± 0.1, second and third grade of French primary school); 16 children aged 10-11 years (mean age: 10.8 ± 0.1, fifth and sixth grades of French school); 11 children aged 12-13 years (mean age: 13.0 ± 0.1, seventh and eighth grade of French secondary school); 12 children aged 14-15 years (mean age: 14.5 ± 0.1, ninth grade of French secondary school), and one group of adults. An ANOVA performed on the mean age showed that the groups were significantly different from each other (F(5,73) = 262.84, p < 0.0001). Participants had to satisfy the following criteria to be included in the study: no known neurological or psychiatric history, no history of reading difficulty, no visual impairment or difficulty with near vision. Children underwent both the similarity test of the WISC IV (assessing verbal capability by abstracting criteria common to two objects and by excluding differences) and the matrix test of the WISC IV (assessing logic capability). All children tested had normal verbal (10.4 ± 0.4) and logic (11.9 ± 0.5) capabilities (normal range is 10 ± 3, as reported in the Wechsler intelligence scale for children, fourth edition, 2004). Participants underwent both a sensorial and motor ophthalmologic examination (mean values shown in Table 1). All participants had normal binocular vision (mean value of 51 s of arc), which was evaluated with the TNO random dot test. Visual acuity was normal (≥ 20/20) for all participants. The near point of convergence was normal for all participants (mean value of 2 cm). Heterophoria at near distance (i.e. latent deviation of one eye when the other eye is covered, using the cover-uncover test) was normal for all children tested (≤ exophoria of 3.5 prism D). Moreover, an evaluation of vergence fusion capability using prisms was done at near distance. The divergence and convergence amplitudes were also normal for all participants. The investigation adhered to the principles of the Declaration of Helsinki and was approved by the Institutional Human Experimentation Committee (CPP Ile de France I, Hôpital Hotel-Dieu).
Written consent was obtained from the children's parents after an explanation of the experimental procedure. Ocular motor paradigms Stimuli were presented on a 22-inch PC screen, with a resolution of 1920 × 1080 and a refresh rate of 60 Hz. Although it is well known that intermittent illumination could affect saccade accuracy and visual assessment [14], this refresh rate was sufficient to ensure normal saccade performance. The reading and visual search tasks are similar to those used by Bucci et al. [13] and are described below. Reading. Subjects were asked to read a text of four lines from a children's book. The paragraph contained 40 words and 174 characters. The text was 29° wide and 6.4° high; mean character width was 0.5° and the text was written in black 'Courier' font on a white background. Each age group had to read a different text. Figure 1 shows each of these texts; for the 6 to 9 year-olds, an extract from 'Jojo Lapin fait des farces' (Enid Blyton, Hachette Ed.) was used. Participants were asked to read the text silently. When they were finished, they raised a finger and were asked to describe the text. This allowed the researchers to check that the text had been read and understood. The texts used were from three different books that are frequently used by French teachers in different class levels (7-9, 10-12, and over 13 years old). As said in our previous work [15] we chose these age-specific texts to ensure that all words were well known and easily understood by the children. Note that after the task was completed, the researcher asked the child a few questions in order to verify that he/she read the text and understood it. Visual search. This task used the same texts as in the reading task, with one crucial difference: all vowels were replaced by consonants (see Figure 1D, 1E, 1F for the texts for 6 to 9 year-olds, 10 to 12 year-olds and 13 to 15 year-olds, respectively). Children were asked to count silently the number of 'r's occurring in the text. When they were done, they raised a finger and were asked to report this number to the researcher. Table 2 shows the percentage of the number of 'r's counted in the text by each group of participants tested, and the corresponding post-hoc group comparisons. In both tasks, stimuli were presented without time limitations. The recording of each task stopped when the child raised one finger to indicate that they were finished reading/counting. Eye movement recordings Eye movements were recorded with the Mobile Eyebrain Tracker (Mobile EBTH, e(ye)BRAIN, www.eye-brain.com), an eye-tracking device CE marked for medical purposes. The Mobile EBTH benefits from a high frequency camera that allows it to record both the horizontal and vertical eye positions independently and simultaneously for each eye. Recording frequency was set to 300 Hz. The precision of this system is typically 0.5°. In a controlled setting such as the one we used, it reaches 0.25° (see www.eye-brain.com). The recording system does not obstruct the visual field and the calibrated zone covers a horizontal visual angle of ± 22° (see Lions et al., 2013). Procedure Children were seated in a chair in a dark room and used a headrest to avoid any head movement. Viewing was binocular with a viewing distance of 60 cm. Calibration was done at the beginning of each eye movement recording, for each eye during binocular viewing. The best calibration could be a haploscopic arrangement [16].
However, it should be noted that binocular vision was normal for all children tested (see stereoacuity scores in Table 1), suggesting that they were fixating targets with both eyes. A previous study from [17] comparing normal and strabismic children confirmed that in the absence of strabismus either type of calibration (under monocular or binocular viewing) was valid. During the calibration procedure, children were asked to fixate a grid of 13 points (diameter 0.5 deg) mapping the screen. Point positions in the horizontal/vertical planes were: -20.9°/12.2°; 0°/12.2°; 20.9°/12.2°; -10.8°/6.2°; 10.8°/6.2°; -20.9°/0°; 0°/0°; 20.9°/0°; -10.8°/-6.2°; 10.8°/-6.2°; -20.9°/-12.2°; 0°/-12.2°; 20.9°/-12.2°. Each calibration point required a fixation of 250 ms to be validated. A polynomial function with five parameters was used to fit the calibration data and to determine the visual angles. After the calibration procedure, the reading or visual search tasks were presented to the child. Duration of each task was kept short (lasting a couple of seconds) to avoid any head movements and to ensure an accurate measurement of eye movements (see for details [15]). Data analysis An ANOVA was performed with the six age groups as inter-subject factor and clinical orthoptic values as within-subject factors. For ocular motor data, calibration factors for each eye were determined from the eye positions during the calibration procedure. The software MeyeAnalysis (provided with the eye tracker, e(ye)BRAIN, www.eye-brain.com, France) was used to extract saccadic eye movements from the data. It automatically determines the onset and the end of each saccade by using a built-in saccade detection algorithm. The algorithm used to detect saccades is adapted from [18]. All saccades with an amplitude greater than 1 degree were detected. All detected saccades were checked by the researcher and corrected/discarded if necessary. The number and the amplitude of progressive saccades (prosaccades, from left to right) and regressive saccades (backward saccades, from right to left) and the duration of fixations between each saccade were analyzed. The time to perform each task was also analyzed and was determined by the delay between the first and the last saccade. In both tasks (reading and visual search), binocular coordination was measured for each saccade and each fixation. We examined the amplitude of the disconjugate components during each saccade (left eye - right eye). The disconjugacy was measured as the change in vergence between the beginning and the end of each saccade. We also examined the disconjugate component of each post-saccadic fixation period (see [13]). Data were analyzed using different multiple linear regression models using the number of saccades, the amplitude of saccades (in degrees), the duration of fixations (in ms) and the duration of task (in seconds). Given that saccade disconjugacy depends on the saccade amplitude, the values of disconjugacy during and after the saccades were presented as the ratio of the disconjugacy to the saccade amplitude (in percentage). The predictor variable for each test was the participant's age (in years and months). Linear regressions were performed for children only, and presented on corresponding graphs as dotted lines. We also measured the correlation coefficient between the saccadic disconjugacy and the post-saccadic fixation disconjugacy.
Finally, ANOVAs were performed with the six age groups as inter-subject factor and the type of task (reading vs visual search) as within-subject factor. We considered the effect of a factor to be significant when the p-value was below 0.05. Results Eye movement pattern during reading and visual search Number of saccades. Figure 2 shows the number of progressive and regressive saccades assessed during reading (2A & 2B) and visual search (2C & 2D) as a function of age for each participant examined, and the regression line observed in each case. There was a significant effect of age in the reading task: the number of saccades decreased as age increased (R² = 0.32, p < 0.0001 and R² = 0.21, p < 0.001, respectively for the progressive and regressive saccades). There was also a significant effect of age on the visual search task as the number of progressive saccades (R² = 0.07, p < 0.02) and the number of regressive saccades (R² = 0.09, p < 0.006) decreased with age. Amplitude of saccades. Figure 3 shows the mean amplitude of saccades (progressive and regressive) assessed during reading and visual search tasks for each participant. There was a significant effect of age: the amplitude of progressive saccades increased with age in the reading task (R² = 0.17, p < 0.001) but not in the visual search task (R² = 0.0002, p = 0.90). We found no effect of age on the amplitude of regressive saccades, neither in the reading task (R² = 0.008, p = 0.46) nor in the visual search task (R² = 0.05, p = 0.06). Duration of fixations. In order to better understand the participants' fixation behaviour, we also measured the average duration of fixations, which is the time period between two saccades (Figure 4A for reading and 4B for visual search). We found a significant effect of age on the duration of fixations, which decreased with age in both tasks (reading: R² = 0.22, p < 0.0001; visual search: R² = 0.30, p < 0.0001). Total task duration. We measured the period between the first saccade and the last fixation period, or total task duration. Figure 4C and 4D show this duration assessed for every participant in the reading task and visual search task respectively. We found a significant effect of age in both the reading task (R² = 0.23, p < 0.0001) and the visual search task (R² = 0.32, p < 0.0001). In both cases, the mean task duration decreased as age increased. Binocular Coordination during reading and visual search Disconjugacy during the saccades. Figure 5 shows the disconjugacy observed during saccades. For both tasks we found a significant effect of age on disconjugacy in progressive saccades (R² = 0.07, p < 0.02 and R² = 0.06, p < 0.03 respectively for the reading and visual search task) but not in regressive saccades (R² = 0.0001, p = 0.98 and R² = 0.002, p = 0.74 respectively for reading and visual search): the disconjugacy of progressive saccades decreased with age. Disconjugacy of post-saccadic fixation period. The values of disconjugacy measured during the post-saccadic fixation period are shown in Figure 6. For both tasks we found a significant effect of age (R² = 0.12, p < 0.002 and R² = 0.05, p < 0.047 respectively for reading and visual search): the disconjugacy of the post-saccadic fixation period decreased with age. Sign of disconjugacy. In Figure 7, the saccadic disconjugacy is plotted versus the post-saccadic fixation disconjugacy for each saccade of each participant examined in the reading task.
We found a significantly negative Pearson's correlation in reading (r = -0.06; p < 0.001), indicating that the disconjugacy of the saccades, which is divergent for the majority of the cases, is followed by convergent disconjugacy during the post-saccadic fixation period, thus reducing binocular disparity. Type of Task: Reading versus visual search We focused on the development of automatic processing in reading by comparing the reading task and the visual search task. Children were plotted by age and school levels: 6-7 years (first grade); 8-9 years (second and third grade); 10-11 years (fifth and sixth grade); 12-13 years (seventh and eighth grade); 14-15 years (ninth grade); and compared to a group of normal adults (aged between 24 and 39). Number of progressive and regressive saccades (Figure 8A and 8B). We found a significant task effect, with more saccades observed in the visual search task than in the reading task (F(1,72) = 130.57, p < 0.0001) and a significant interaction between task and group (F(4,72) = 10.89, p < 0.0001). Post hoc comparisons showed no difference between the tasks in the 6-7 year-old and 8-9 year-old groups. For all other groups of children and adults, the number of progressive saccades was smaller in reading than in the visual search task (all ps < 0.0001). The number of regressive saccades was smaller in the reading task than in the visual search task for children aged between 10 and 15 years (all ps < 0.002) but there was no difference in adults (p = 0.08). Amplitude of progressive saccades (Figure 8C). Given that regressive saccades were not age-dependent (see Figure 3), we only focused on the amplitude of progressive saccades. We found a significant task effect (F(1,72) = 121.84, p < 0.0001), with smaller saccade amplitudes in the visual search task than in the reading task, and a significant interaction between task and group (F(5,72) = 22.11, p < 0.0001). Post hoc comparisons showed no difference between both tasks in the 6-7 year-old and 8-9 year-old groups. For all groups of older children and adults, the amplitude of progressive saccades was larger in reading compared to the visual search task (all ps < 0.001). Duration of fixations (Figure 8D). There was a significant effect of task on the fixation durations (F(1,73) = 116.54, p < 0.0001), with longer durations of fixations in the visual search task than in the reading task. We also found a significant interaction between task and group (F(5,73) = 4.93, p < 0.001). Post hoc comparisons showed no difference between reading and visual search for the 6-7 year-old group (p = 0.47), and fixations were shorter in reading than in visual search (all ps < 0.001) for the four other groups of children and adults. Task duration. We found a significant task effect on the task duration (F(1,72) = 167.95, p < 0.001), as the visual search took longer to complete than reading. The interaction between task and group was also significant (F(5,72) = 6.56, p < 0.001). Post hoc comparisons showed no difference between reading and visual search for the 6-7 year-old group (p = 0.17), while task duration in reading was shorter compared to visual search (all ps < 0.001) for the four other groups of children and adults. In summary, comparisons between the two tasks showed that between 6 and 9 years of age, the reading task is performed in a very similar fashion to the visual search task. After 10-11 years of age, children are significantly more accurate and faster in reading than in visual search, as with adults.
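As a concrete illustration of the analysis pipeline whose outcomes are reported above, the sketch below shows how a per-saccade disconjugacy ratio and an age regression of the kind summarized by the R² and p-values in this section could be computed. The column layout, the toy numbers, and the use of plain least squares are assumptions of the sketch, not the authors' actual MeyeAnalysis/statistics pipeline.

```python
# Illustrative sketch (not the authors' pipeline): disconjugacy ratio per saccade
# and a simple linear regression of an ocular-motor measure on age.
import numpy as np

def disconjugacy_ratio(left_on, left_off, right_on, right_off, amplitude):
    """Change in vergence (left - right eye) across a saccade, as % of saccade amplitude."""
    delta = (left_off - right_off) - (left_on - right_on)
    return 100.0 * delta / amplitude

# Toy values standing in for the children's data (ages in years, fixation duration in ms).
ages = np.array([6.5, 7.2, 8.1, 8.9, 10.1, 11.4, 12.8, 13.9, 14.6])
fixation_ms = np.array([420, 405, 380, 350, 305, 290, 265, 255, 250])

slope, intercept = np.polyfit(ages, fixation_ms, 1)              # least-squares fit
pred = slope * ages + intercept
ss_res = np.sum((fixation_ms - pred) ** 2)
ss_tot = np.sum((fixation_ms - fixation_ms.mean()) ** 2)
print(round(slope, 1), round(1.0 - ss_res / ss_tot, 2))           # slope (ms/year) and R^2
```

The task-by-group comparisons reported above would additionally require per-subject repeated measures; packages such as pingouin offer mixed-ANOVA routines that could reproduce that kind of analysis, though the authors' exact statistical software is not specified here.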
Discussion The aim of the study was to further explore the binocular coordination of saccades while reading a text and and while performing a visual search task in a large population of children and compare these results to those obtained from a group of normal adults. The most important findings are discussed below. Saccade and fixation duration characteristics during reading Our findings concerning binocular data on the number of saccades and fixation duration during reading are in line with findings previously reported on ocular motor behaviour from McConkie et al. [3] and more recently from Blythe et al. [12] showing that children's reading skills develop with age. The present study on a large population of children showed that during reading young children have smaller saccades, frequent regressive saccades and that their fixations are longer. This ocular motor behaviour is typically observed in children learning to read and for whom reading skills are still immature. With age, children's reading capabilities improve and they learn to read by making larger progressive saccades, fewer regressive saccades and shorter fixations; reading a text for older children becomes a simple and quite rapid task similar to adults. The improvement of reading skills could be due to cortical development. Luna, Velanova, & Geier [19] reported that the activity of some cortical areas involved in saccadic eye movements (e.g. frontal and parietal cortex) is lower in young children than in adults and increases until adolescence. Furthermore, temporal and parietal structures are involved in linguistic processes and they develop during childhood [20]; [21]. A recent fMRI study from Olulade et al. [22] reported differences between children and adults during word processing in the anterior left occipitotemporal cortex, providing evidence of developmental course of those regions. Our findings in reading are in line with this developmental hypothesis. Indeed, both the number of saccades (progressive and regressive), the duration of fixations, and the duration of the task decrease with age independently of the task, suggesting an improvement in the performance of saccades. Note, however, that brain imaging studies during reading in a large population of children will need to explore further such issue. Disconjugacy during and after saccades in reading and visual search task This study shows that disconjugacy measured during and immediately after the saccade (during the post-saccadic fixation period) decreases with age. This finding is in line with previous studies from our group [7] in normal 7 year-olds and more recently from [13] comparing binocular saccade coordination in normal and dyslexic children during reading. Fioravanti, Inchingolo, Pensiero & Spanio [8] were the first to show that saccades were highly discoordinated in young children and that such disconjugacy decreased with age reaching adult values at about 11-12 years. Fioravanti's study [8] was run on a small number of subjects (6 children) and measured horizontal saccades to target-LEDs at a relatively far viewing distance (1 m). Despite different experimental conditions, our results are in line with Fiovaranti et al's [8], suggesting that central ocular motor learning mechanisms responsible for saccade yoking are still immature in young children and that they develop with training and visual experience. The binocular coordination of saccades in older children (.10 years old) becomes similar to those reported in adult subjects. 
Our data is also in line with Blythe et al. [12] who compared the binocular coordination of saccades during a reading task in twelve children (7 to 11 years) and twelve young adults (18 to 21 years). Based on these studies we can make some hypothesis on how and where good quality binocular coordination is reached in humans. According to Lewis et al. [23], we can hypthosize that the fine control of binocular saccade coordination is based on an efficient relationship between the motor command of the saccades and the vergence subsystems at the premotor level. This hypothesis is based on studies from our group [13]; [15]; [24] of different types of child populations showing poor vergence fusional capabilities (i.e., dyslexic children, children with strabismus and children with vergence insufficiency). Indeed, recall that during the reading task (an activity done at near distance) a correct convergence command strictly linked with the saccade command is needed in order to adjust the visual axes of the two eyes at the distance of the word for appropriate fusion of the two retinal images. All child populations with poor vergence capabilities (as those previously cited) showed poor binocular saccade control. This hypothesis, however, needs further exploration by testing the quality of binocular saccade coordination before and after orthoptic vergence training. Another important aspect of binocular saccade coordination is the disconjugacy before and after the saccades. Blythe et al. [12] reported that fixations in children were more frequently divergent while adults showed more frequent convergent fixations during the post-saccadic period. It is well known that in order to decrease or eliminate the divergent disconjugacy during the saccades, adults make convergent post-saccadic fixations (see [25]). Our data on the correlation between the saccade disconjugacy and the post-saccadic fixation disconjugacy shows that in most cases the divergent saccade disconjugacy is reduced afterwards by convergent post-saccadic fixations. This is the normal pattern due to the abducting-adducting eye asymmetry reported by Collewjin et al. [25] in adult subjects. The origin of such a stereotypical pattern in adults is still not clear and Collewjin and collaborators did not exclude a central or peripheral origin for this mechanism. Collewijn, Erkelens, & Steinman [26] suggested that the divergent saccade disconjugacy could be a useful strategy in the natural environment to respond quickly with a divergence during an eye movement. Our data, in contrast with Blythe et al. [12], shows that this ability is already present in 6 year-olds. It is most likely that this mechanism does not need adaptation to visual experience to work correctly and, for this reason it could have more peripheral than central/cortical origin. Reading versus visual search task The reading and the visual search tasks had different demands on visuo-perceptual, attentional and spatial processing. Consequently one could expect to observe different ocular motor behaviour in these two tasks. This was the case except for the disconjugacy measured after the saccades. The data on the number, amplitude of saccades and duration of fixations showed significant differences between the different groups of children and adults. In general we can say that younger children (6-7 and 8-9 year old groups) display similar ocular motor behaviours in both tasks while older children similar to adults do not. 
We suggest that when reading capabilities are not yet developed children perform both tasks in a similar way. In contrast, when children have better developed reading skills, they accomplish both tasks differently, which is reflected in their ocular motor behaviour. In older children, the number and the duration of the fixations are different in both tasks because the underlying cognitive processes are different. Indeed, in a visual search task, children are asked to identify and count a single target (r), and have to inspect each letter closely, whereas in the reading task they can easily skip letters without disrupting their reading (skipping letters being part of a well-developed reading ability). For these reasons, for older children as with adults, reading becomes an easier and faster task to perform than a visual search. Finally, we have to point out that we did not observe any difference between both tasks in terms of the quality of binocular coordination of saccades. This data confirms and expands our previous work in relation to normal as well as dyslexic children (Bucci et al. [13]) showing that reading a text does not interfere with the quality of binocular coordination. However, the present study is in contrast with that of Heller and Radach [27]. Heller and Radach [27] reported that adult subjects showed a large disconjgacy during the post-saccadic fixation period when they read a text. The authors advanced the hypothesis that the material of the reading task influences the binocular coordination of saccades. They concluded that binocular motor control while reading normal text is poor, most likely because the semantic process is easier with normal text and can be achieved even in the absence of perfect binocular motor control. In line with this hypothesis Kirkby, Blythe, Drieghe, & Liversedge [28] reported poor binocular coordination in dyslexic children during reading but not during a non-linguistic scanning task, suggesting that reading processing difficulties associated to reduced engaged attention, could affect binocular coordination in dyslexia. Conclusion and Futures Studies In summary, the data reported here suggests an immaturity of the binocular coordination of saccades during reading as well as during visual search tasks in the youngest children. At 6 or 7 years of age, an age at which children start to learn to read, not only are reading skills immature (as previous studies had shown, see review of Rayner [5]) but saccades from the two eyes are also unyoked. We report this disconjugacy during and after the saccades both in reading and in a visual search task. In other words, we do not observe a change in the properties of binocular coordination depending on the type of task. Further studies exploring reading and other visual tasks in which linguistic processes do not occur (for example scanning of simple dot stimuli as used by Kirkby, Blythe, Benson, & Liversedge [29]) in large child populations will be useful to better understand the characteristics of binocular saccade coordination. Acknowledgments Authors thank the directors and the teachers of the Collège Saint André (Saint Maur des Fossés) and of the elementary school Lamazou (Paris) for allowing ocular motor tests; parents and children for their kind participation; Naziha Nassibi, orthoptist, for visual examinations of children; and Anna Seassau for revising the English version of the manuscript.
7,700
2013-07-19T00:00:00.000
[ "Psychology", "Biology" ]
Simple encoding of higher derivative gauge and gravity counterterms Invoking increasingly higher dimension operators to encode novel UV physics in effective gauge and gravity theories traditionally means working with increasingly more finicky and difficult expressions. We demonstrate that local higher derivative supersymmetric-compatible operators at four-points can be absorbed into simpler higher-derivative corrections to scalar theories, which generate the predictions of Yang-Mills and Gravity operators by suitable replacements of color-weights with color-dual kinematic weights as per Bern-Carrasco-Johansson double-copy. We exploit that Jacobi-satisfying representations can be composed out of other Jacobi-satisfying representations, and show that at four-points only a small number of building blocks are required to generate the predictions of higher-derivative operators. We find that this construction saturates the higher-derivative operators contributing to the four-point supersymmetric open and closed-string tree amplitudes, presenting a novel representation of the four-point supersymmetric open string making this structure manifest, as well as identifying the only four additional gauge-invariant building blocks required to saturate the four-point bosonic open string. Gravitational quantum scattering amplitudes-the invariant quantum evolution of what distance means in space and time, consistent in the classical limit with Einstein's General Relativity (GR)-are much simpler than expected. This simplicity can be traced to the fact that their perturbative dynamics are completely encoded through a double-copy structure [1][2][3] in the predictions of much simpler gluonic or gauge theories. In turn, these gauge theory predictions are strongly constrained by a similar structure relating kinematics and color-weight, entirely hidden in any standard ways of writing their actions. While Yang-Mills (YM) theory is famously renormalizable in four dimensions, it ceases to be in higherdimensions, requiring a completion in the UV. It is currently an open question as to whether any fourdimensional (pointlike) quantum field theory of gravity is perturbatively finite. The most promising case, maximally supersymmetric supergravity, is a subject of much current research exploiting double-copy[2, [4][5][6][7][8][9][10][11][12][13][14]. Independent of perturbative finiteness, it is very possible that new physics in the UV could necessitate higher-order corrections, whose predictions using traditional methods are often exhaustive to produce. Our main result is that the predictions due to higher-derivative local gauge and gravity operators -encapsulating novel UV physics -can also be incredibly simple because of this very same doublecopy structure. Recent work has shown that at tree-level both the supersymmetric and bosonic open string amplitudes admit field theory double-copy descriptions [15][16][17][18][19][20], pulling the higher-derivative corrections to a putative effective scalar bi-colored theory, encapsulating all order α ′ corrections, called Z-theory. Inspired by the existence of Z-theory amplitudes, as a proof of concept, here we consider a bootstrap approach, asking simply what predictions are consistent with unitarity, double-copy structure, gauge invariance, and locality. We find that all tree-level string corrections to supersymmetric YM and GR at four-points follow from simple field theory considerations. 
These corrections can be obtained through a simple composition rule that combines color-dual numerators into more complex numerators with the same algebraic properties, promoting colorweights to carry the higher-derivative corrections. This naturally introduces a new type of numerator, mixing color and kinematic factors to satisfy adjoint-type relations in concert. One might expect many possibilities even at four-points, yet we find only three distinct color building blocks. We see that they generate all four-point single-trace gauge-theory predictions consistent with maximal supersymmetry. Concordant corrections to maximal supergravity are even simpler, requiring only permutation invariant kinematic factors. These considerations only specify the analytic form of higher-derivative corrections. One may choose to fix their coefficients by assuming the asymptotic uniqueness of the Veneziano amplitude (c.f. Ref. [21][22][23]). Similar ideas apply to the open bosonic string, where just five different gauge invariant building blocks, dressed with the same simple modified color factors, are sufficient to generate the full low energy expansion of YM + (DF ) 2 theory [20]. This discussion complements and explains results noted in Ref. [24], demonstrating explicitly through coefficient matching that the low energy four-field effective actions of super and bosonic strings, governed by Z-theory, are highly constrained by field theory color-kinematics duality. I. STRIATING BY f abc STRUCTURES. We will briefly review adjoint-type color-kinematic representations at four-points. We refer the interested reader to Ref. [3] for a detailed treatment. Yang-Mills amplitudes can be expressed in terms of cubic (trivalent) where s, t, u are four-point momentum invariants following an all outgoing convention as s = s 12 = (k 1 + k 2 ) 2 , t = t 23 = (k 2 + k 3 ) 2 , and u = −s − t = s 13 = k 1 + k 3 ) 2 . The color-weights c g are simple combinations of adjoint color-generators and the kinematic weights n g are Lorentz products between external momenta and polarization vectors. We emphasize that both the color weights and the kinematic weights satisfy Jacobi identities and antisymmetry around vertex flips: As such this is called a color-dual representation of Yang-Mills, specifically manifesting an adjoint-type doublecopy structure. We will parameterize such adjoint type graph weights c g and n g in terms of the the three Mandelstam invariants as follows, The pattern to recognize is j(a|bc|d) = j(s ab , s bc , s cd ). Gauge invariance is maintained by the fact that the color-weights, c g , satisfy anti-symmetry and Jacobi identities. As per double-copy construction, we can replace the color weights with kinematic weights that also satisfy Jacobi identities and anti-symmetry to generate gravity amplitudes invariant under linearized diffeomorphism: Details of state identification for a variety of (super)gravity theories can be found in Ref. [3], but the important point to realize is that the kinematic weights n g and n g need not come from the same theory, and indeed the double-copy construction promotes any global supersymmetry of the kinematic weights into a local supersymmetry of the gravitational amplitude. It will simplify our discussion to introduce the notion of gauge-invariant ordered amplitudes. Let us cast the color-weights in Equation (1) to a minimal basis using the relations in Equation (2), say by eliminating c t and collecting in terms of c s and c u . 
This results in collections of gauge invariant kinematic terms, called ordered or partial amplitudes, with distinct color-basis prefactors: = c s A YM (s, t) + c u A YM (u, t) . As the A YM (s ab , s bc ) = A YM (a, b, c, d) = n(a|bc|d)/s ab + n(d|ab|c)/s da appear in the full amplitude with coefficients that are independent color basis elements, they must themselves be individually gauge invariant. Expressing these ordered amplitudes in a basis of kinematic weights n g , say by eliminating n u via Equation (2), demonstrates that the distinct color orders are intimately related. Indeed one immediately identifies the permutation invariant quantity: with the identification of the permutation invariant product stA YM (s, t) = (stu)A YM (s, t)/u a simple consequence. This is the lowest multiplicity manifestation of the (n − 3)! (or the so called BCJ) ordered-amplitude relations [1]. We will first be concerned with how we can express higher-derivative corrections to Yang-Mills by only modifying the color-weights in a manner consistent with this adjoint-type structure. Let us now introduce the notion of a Jacobi identity satisfying composition. If we have functional maps j(a, b, c) and k(a, b, c) that satisfy Jacobi identities (X s = X t + X u ) and antisymmetry X(a, b, c) = −X(a, c, b), then we can define a new antisymmetric and Jacobi-satisfying representation n s as a composition of j and k by At four points, it is natural to ask if there exists a scalar color-dual function that is only linear in the Mandelstam invariants. Indeed one does, which we will refer to as the simple scalar numerator: This corresponds to a scalar charged in the adjoint mediated by a massless vector, e.g. with interaction term f abc A µ (∂ µ φ)φ. What happens when we compose the simple scalar with itself? We find the Jacobi-satisfying kinematic numerator associated with the NLSM, j nl s = s(u − t) = sj ss s ∝ J (j ss , j ss ) . Any further compositions between j ss and j nl only differ from these numerators by appropriate powers of permutation invariant combinations of the Mandelstam invariants, (s 2 + t 2 + u 2 ) and (stu). It is perhaps not surprising that a gauge-invariant color-dual kinematic numerator representation for Yang-Mills can be written [25] as: The most straightforward modification of the colorweights that preserves anti-symmetry and Jacobi involves simple products of permutation invariant scalar combinations: where we introduce a dimensional parameter α ′ to track mass-dimension. This results in an ordered s-t channel scattering contribution proportional to: As all such modifications result in manifestly permutation invariant scalings of the bi-adjoint ordered amplitude, all field theory relations are automatically preserved. One might be surprised by the appearance of the simple scalar numerator appearing in the expression above, but recall that stA(s, t) must be permutation invariant for 4-point ordered amplitudes that satisfy the (n − 3)! BCJ identities. A natural way of generating permutation invariants given adjoint-type structures like c g is to take a sum over products with other adjoint-type structures. It is straightforward to see that yielding the perhaps more familiar expression for stA bi−adj (s, t). This trivial modification is not the only consistent modification to adjoint color weights. We are free to consider terms that compose scalar kinematic weights and adjoint color-weights to result in consistent modifications. 
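Because the building blocks just introduced are plain polynomials in the Mandelstam invariants, their defining algebraic properties can be verified mechanically. The short sympy sketch below is an illustration we add, with one fixed channel-relabeling convention chosen as an assumption so that the identity closes; it checks antisymmetry and the four-point Jacobi identity n_s = n_t + n_u for the simple scalar numerator j_ss, the NLSM numerator j_nl = s(u - t), and their dressings by the permutation invariant (s^2 + t^2 + u^2).

```python
# Symbolic check of antisymmetry and the four-point Jacobi identity for the
# scalar building blocks. The channel relabelings are one convention, assumed
# here so that n_s = n_t + n_u closes identically.
import sympy as sp

s, t, u = sp.symbols('s t u')

def antisymmetric(j):
    return sp.simplify(j(s, t, u) + j(s, u, t)) == 0

def jacobi(j):
    n_s, n_t, n_u = j(s, t, u), j(t, s, u), j(u, t, s)
    return sp.simplify(n_s - n_t - n_u) == 0

j_ss = lambda a, b, c: c - b                     # simple scalar numerator, (u - t) in the s-channel
j_nl = lambda a, b, c: a * (c - b)               # NLSM numerator, s*(u - t) in the s-channel
perm = lambda a, b, c: a**2 + b**2 + c**2        # permutation-invariant dressing

for j in (j_ss, j_nl, lambda a, b, c: perm(a, b, c) * j_nl(a, b, c)):
    print(antisymmetric(j), jacobi(j))           # expect True True for each
```

Dressing by (stu) or (s^2 + t^2 + u^2) manifestly preserves both properties, since the prefactor is channel-independent; this is precisely why the permutation-invariant modifications of the color-weights considered above are automatically consistent.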
Let us first explore composition with the simple scalar numerators, j ss , Such weights result in an ordered (s, t) channel scattering contribution proportional to: These are completely valid adjoint-type partial amplitudes-satisfying both KK and BCJ relationsthat are quite distinct from those given in Equation (14). Consider now compositions between adjoint colorweights and NLSM numerators, J (c, j nl ). It is clear they do not represent additional operators, being completely redundant with the amplitudes given byĉ X,Y +1 g in Equation (14) for on-shell four-point amplitudes. In all of the above, for local four-field operators, we have a restriction on the power of the permutations symmetric (stu) term: we must require X ≥ 1 to avoid any cubic propagators in the resulting amplitude. It turns out that with the two distinct building blocks, c X,Y andĉ X,Y,ss , we can build any higher derivative fourpoint amplitude A HD that will involve c s , c t , and c u and that satisfies the (n − 2)! and (n − 3)! field theory relations. Namely stA HD must be permutation invariant in all channels. Such a permutation invariant function of the adjoint color-weights c g and Mandelstam invariants, can always be written in terms of a crossing-symmetric adjoint-type polynomial function in Mandelstam invariants j(a, b, c) as follows: Any such j s = j(s, t, u) can be written as a superposition of simple-scalar and NLSM numerators, schematically, using the following general decomposition, a fact easily verified by recalling the definitions of j ss (s, t, u) = (u − t) and j nl (s, t, u) = s(u − t). What is particularly notable is that their coefficients in Equation (18) are each permutation invariant under all S 3 (s, t, u) by virtue of the adjoint-type properties of j(a, b, c). One might be concerned about potential poles, but it is straightforward to see that both must be local expressions. The simplest argument is to realize b = c is always a zero of the polynomial j(a, b, c) by virtue of antisymmetry, and thus (b − c) must be a factor of j(a, b, c). taking care of all divisors except for (s − t). But s = t is manifestly a zero of each numerator in these expressions, and thus the remaining divisor (s − t) must be a factor of both. In summary we see that our two building blocks can reproduce every scalar polynomial adjoint-type numerator involving c s , c t , and c u . We have not yet exhausted all potential local operators. Namely, we have not yet considered the possibility that the color-weight information may not be in the adjoint, and could itself be permutation invariant, as per the symmetric symbol: This could be dressed with color-dual scalar weights and permutation symmetric kinematics to generate the predictions of additional distinct operators. Due to redundancy between building blocks, we need only consider adding to our repertoire of globally consistent building blocks at four-points the contributions of scalar weights of the non-linear sigma model, where, due to the propagator canceling prefactor in every j nl g , we are now free to include the cases where X ≥ 0. These building blocks result in (s, t) ordered scattering amplitudes proportional to: again manifestly satisfying the usual field-theory relations by construction. Putatively distinct weights proportional to the simple scalar numerator, can be seen to be redundant, building equivalent amplitudes to those generated fromĉ in Equation (22). 
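The mechanism invoked above, in which channel sums of products of adjoint-type weights produce permutation invariants, can be made concrete with the scalar numerators already introduced. The sketch below is our own check; the per-channel assignments and the overall normalizations (which we simply read off) are conventions of the sketch. It confirms that the j_ss x j_nl and j_nl x j_nl channel sums, weighted by the cubic propagators, land on the permutation invariants (s^2 + t^2 + u^2) and (stu).

```python
# Symbolic check that channel sums of products of adjoint-type scalar numerators,
# divided by the corresponding propagators, are proportional to the permutation
# invariants (s^2 + t^2 + u^2) and (s t u).
import sympy as sp

s, t = sp.symbols('s t')
u = -s - t                                                   # massless four-point kinematics

j_ss = {'s': u - t, 't': s - u, 'u': t - s}                  # simple scalar numerator per channel
j_nl = {'s': s*(u - t), 't': t*(s - u), 'u': u*(t - s)}      # NLSM numerator per channel
prop = {'s': s, 't': t, 'u': u}

mixed  = sum(j_ss[g] * j_nl[g] / prop[g] for g in 'stu')     # j_ss x j_nl channel sum
double = sum(j_nl[g] * j_nl[g] / prop[g] for g in 'stu')     # j_nl x j_nl channel sum

print(sp.simplify(mixed / (s**2 + t**2 + u**2)))             # a pure number (3 with these conventions)
print(sp.simplify(double / (s*t*u)))                         # a pure number (-9 with these conventions)
```

The second line is the scalar analogue of the double-copy statement that squaring NLSM numerators produces an amplitude proportional to stu, a point that reappears in the permutation-invariant striation of the next section.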
With only three building blocks:ĉ we have exhausted all single-trace higherderivative modifications of color-weight, and so we find that the generic form of such single-trace higherderivative corrections to Yang-Mills to be encapsulated by: where X, Y are integers, and the a i are free parameters encoding distinct operator Wilson-coefficients. All local higher-derivative SUSY-compatible gauge corrections to the four point tree-level amplitude, consistent with adjoint representations, will be given by suchĉ simply as: In Tab. I we provide corresponding higher derivative scalar and gauge operators associated with the variouŝ A through mass dimension four. The supersymmetric open string is a known UV completion to (super) Yang-Mills. It is a fair question to ask whether our simple color-modified building blocks for higher-derivative amplitudes are sufficient to capture the open superstring low-energy expansion [18,26,27]. We will see that the answer is yes, but will delay this discussion until after we have introduced the permutation invariant striation in the next section. We have only, thus far, modified color-weights. As global supersymmetry is satisfied by the unmodified Yang-Mills kinematic weights, this exhausts a discussion consistent with the global supersymmetry inherent in YM amplitudes at tree-level. Composition between the above scalar weights and kinematic Yang-Mills weights always satisfies Jacobi, but it is easy to see that the only composition that maintains gauge invariance is redundant with the trivial modification of color-weights with permutation invariant prefactors that we have already considered in Equation (13). What about non-supersymmetric operators that can be applied to gauge theory? The same discussion carries through, essentially unchanged, by replacing n Y M with other gauge invariant 4-point adjoint color-dual graph weights not trivially related to n Y M , such as the n F 3 numerator weights identified in Ref. [27], or more broadly with the type of n (DF ) 2 weights responsible for the ordered amplitudes defined in the context of the bosonic open string as per Ref. [16,20]. We will return to this discussion in a later section discussing the open bosonic string where we make it clear that only four additional building blocks are required. Next let us consider counterterms to gravity consistent with an adjoint double-copy representation. From a color-kinematic perspective, it is natural to consider replacing the color weights with Yang-Mills kinematic weights. Let us first treat the familiar replacements of the f abc based color weights c g → n YM g , so that and similarly forn (X,Y,ss) s . In the case ofn (X,Y ) s we encounter no obstacle in the resulting ordered amplitudes; indeed, quite simply one finds A bi−adj (s, t) → A YM (s, t) in Equation (14). Gauge-invariance, however, immediately excludes the amplitudes generated fromn (X,Y,ss) s , for essentially the same reason that we could only include trivial (permutation invariant) higher-derivative modifications to n YM . We are left only with the question as to what permutation invariant quantity to replace d abcd with inĉ (X,Y,d,nl) s to generate gravity amplitudes without introducing unphysical poles. There are two distinct candidates: stA YM (s, t) and A YM (s, t)/u. 
It turns out that both choices are redundant with the amplitudes generated by n̂ (X,Y ) s , leaving us with only the trivial building block n̂ (X,Y ) s for adjoint higher-derivative operators consistent with local supersymmetry for N > 4 in four dimensions. As such, we have a simple argument that the only such higher-derivative local operators available to gravity at 4-point give predictions simply proportional to the 4-point graviton amplitude: As these modifications amount to simple factors of permutation invariant kinematics, all of these higher-dimensional corrections are consistent with local supersymmetric Ward identities. For operators restricted to on-shell local supersymmetry consistent with N ≤ 4, one can also consider similar arguments where at least one copy has the adjoint color-weights replaced with non-supersymmetric gauge-invariant adjoint-type graph ones. Such an example is F 3 , whose double-copy to gravity was considered in [27,28]. These are of particular interest because of the possibility of removing anomalies in associated supergravity theories [29][30][31].
II. STRIATING BY d abcd STRUCTURES.
In addition to admitting an adjoint-type double-copy structure, these corrections admit an alternative decomposition into permutation invariant quantities. This is not the first opportunity to see that a single amplitude may admit multiple distinct double-copy descriptions, depending on which algebra color-kinematics duality makes manifest. Indeed, the dimensional reduction of four-dimensional supergravity theories to three dimensions admits both the adjoint-type double-copy construction of three-dimensional super-Yang-Mills amplitudes, as well as the three-algebra type double-copy construction of BLG amplitudes [32][33][34]. As we see here, the ability to striate along different algebras may be quite general. Consider the full four-point amplitudes for Yang-Mills and Gravity expressed in terms of Jacobi-satisfying weights in Eqs. (1) and (6). Solving for n i in terms of ordered YM amplitudes and for c i in terms of ordered bi-adjoint scalar amplitudes leads to the manifestly permutation invariant representations of the four-point Yang-Mills and gravity amplitudes as: All elements (stA bi−adj ), (stA YM ), and (stu) are manifestly permutation invariant, and indeed are recognizable as proportional to full four-point amplitudes of the known theories NLSM, Born-Infeld, and Special Galileon, respectively: −A Spec.Gal = stu = st Ã NLSM (s, t). We have introduced Ã NLSM to emphasize fixing the more standard normalization for chiral-pion numerators relative to Equation (12): ñ NLSM g = (1/3) j nl g . One obvious feature of permutation invariant striations is that full amplitudes for theories serve as natural building blocks. It is, for example, clear that the only permutation invariant higher-derivative modification to A (GR) in Equation (29) that does not affect gauge invariance is to simply include products of permutation-invariant scalar functions. The permutation invariant modifications to the gauge theory are generated by promoting stA bi−adj to sums over combinations of the building blocks st (X,Y ) , st (X,Y ) ss , and st (X,Y ) d,nl introduced earlier, and can all be interpreted as the full amplitudes of various higher-derivative corrections. We note in passing that this permutation invariant color-dual discussion admits the following whimsical departures from the typical "GR ∼ YM 2 " slogan at four-points:
III. STRING AMPLITUDES AT FOUR POINTS. 
We are now prepared to find our building blocks, resummed over all orders in α ′ , in the tree-level four-point open supersymmetric string amplitude. This can be interpreted as answering a field theory question of how atoms of prediction composed into higher-derivative scalar corrections can be made consistent with a UV completion involving massive spin intermediaries of a particular form. We start by recognizing the open superstring amplitude as the field theory double copy between Chan-Paton dressed Z-theory [15,17] and (super) Yang-Mills. At four-points, this amplitude can be represented in a permutation invariant color-dual form as: where all supersymmetric Ward identities are satisfied by virtue of operations on the Yang-Mills factor stA YM (s, t), and A Z (s, t) is the field-theoretic (s-t) partial amplitude of Chan-Paton dressed Z-theory encoding all orders of α ′ corrections. We can build [stA Z (s, t)] starting from the bi-ordered doubly-stripped partial Z-amplitude Z 1234 (s, u), where the subscript ordering refers to the Chan-Paton trace ordering, and the parenthetical ordering obeys field-theory relations. We form the field-theory permutation invariant for this Chan-Paton ordering by simply taking the product: suZ 1234 (s, u) = stZ 1234 (s, t). By exploiting monodromy relations to permute the subscript orderings, we can generate the Chan-Paton dressed expression stA Z (s, t) required in Equation (33), stA Z (s, t) = Σ σ∈S 3 (2,3,4) Tr[1σ] [sin(πα ′ s 1,σ(3) ) / sin(πα ′ s 1,3 )] (stZ 1234 (s, t)) , (35) where we use Tr[ρ] to denote Tr[T a ρ(1) T a ρ(2) T a ρ(3) T a ρ(4) ]. The above is invariant under exchange of any channels, and by expressing the Chan-Paton trace factors in terms of c s , c t , c u , and d abcd (and explicitly symmetrizing as appropriate), we find the following simple color-dual permutation invariant form for the full Chan-Paton dressed open superstring, where the manifestly permutation-symmetric Γ {s,t,u} corresponds to a series of higher mass-dimension combinations of Mandelstam invariants with coefficients responsible for familiar ζ contributions to the low-energy expansion, Z adj contains all expressions involving Chan-Paton trace combinations c s , c t , or c u , and Z sym contains all terms proportional to d abcd . These are given as follows: Note that even Z adj is manifestly permutation invariant, as the z g satisfy anti-symmetry and Jacobi identities in concordance with c g . The z g take a particularly simple form, with z s given explicitly and the rest following from relabeling: z t = z s | s↔t and z u = z s | s↔u . In this form, all the coefficients for c (X,Y,d,nl) s may be easily identified already from the low-energy expansion of Z sym . The remaining two building blocks only require a little teasing out from Z adj , which may be achieved by using Equation (18) to rewrite the z g . This allows us to separate z s into terms proportional to j ss s = (u − t) , and terms proportional to j nl s = s(u − t), (41) where S p denotes sin(πα ′ p), and the Z bi−adj and Z ss higher-derivative corrections are given as follows: The j ss terms can be seen to correspond within Z adj to corrections of the form st X,Y (s, t) (cf. Equation (14)), and j nl terms are likewise associated with st (X,Y ) ss (s, t) (cf. Equation (16)). Both Z bi−adj and Z ss are manifestly permutation symmetric and local in all orders of an α ′ → 0 expansion, meaning they are completely spanned at any mass dimension, MD, by a basis in (stu) X and (s 2 + t 2 + u 2 ) Y such that 3X + 2Y = MD, as per our building blocks. 
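To make the (stu) X (s 2 + t 2 + u 2 ) Y counting concrete, here is a short worked enumeration of our own (it is not a table from the paper); it uses only the standard fact that, with s + t + u = 0, every crossing-symmetric polynomial in the Mandelstam invariants is a polynomial in σ 2 = s 2 + t 2 + u 2 and σ 3 = stu:

```latex
% Worked enumeration of the permutation-invariant basis with 3X + 2Y = MD,
% counting MD as the total power of Mandelstam invariants (s + t + u = 0 assumed).
MD = 2:\;\; \sigma_2 \;\;(X,Y)=(0,1) \qquad
MD = 3:\;\; \sigma_3 \;\;(1,0) \qquad
MD = 4:\;\; \sigma_2^{\,2} \;\;(0,2)
MD = 5:\;\; \sigma_2\,\sigma_3 \;\;(1,1) \qquad
MD = 6:\;\; \sigma_2^{\,3},\;\sigma_3^{\,2} \;\;(0,3),\,(2,0) \qquad
MD = 7:\;\; \sigma_2^{\,2}\,\sigma_3 \;\;(1,2)
```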
We have therefore exposed within the 4-point open superstring the three unique Jacobi-identity-satisfying modifications to the color-weights of Yang-Mills. This can now be written in terms of stA Z (s, t) = stA bi-adj (s, t) f bi (s, t, u) + stA ss (s, t) f ss (s, t, u) + d a1a2a3a4 f d (s, t, u) , (44) where the higher-derivative expressions through mass dimension six are given by: We use σ 2 and σ 3 to denote (s 2 + t 2 + u 2 ) and (stu) respectively. The individual a X,Y coefficients that fix Equation (25) to the low-energy expansion of the open superstring up through mass-dimension thirteen are given in supplementary Tab. II, and through mass-dimension sixteen in a machine-readable auxiliary Mathematica file. We now turn to the open bosonic string amplitude at four-point tree-level. It was shown in Refs. [16,20] that this amplitude also obeys a field-theoretic adjoint-type double-copy description with Z amplitudes as follows: (open bosonic string) = (Z-theory) ⊗ (YM + (DF ) 2 ), where (DF ) 2 is a massive higher-derivative YM theory, compatible with the usual BCJ relations but in violation of supersymmetric Ward identities. It is straightforward to identify that only four new higher mass-dimension gauge-invariant adjoint-type vector building blocks are required to build this amplitude upon dressing with the permutation invariant objects σ 2 and σ 3 , compactly encoded in the following denominator: Machine-readable expressions for the four new gauge-invariant building blocks of these amplitudes are included in an auxiliary Mathematica file.
IV. DISCUSSION
We have shown that at four-points there are simple building blocks, manifest in two algebraic striations of four-point scattering amplitudes, that encode higher-derivative corrections to effective gauge and gravity theories. We have demonstrated that these building blocks can be exposed to all orders in α ′ in the open supersymmetric and bosonic string amplitudes. Preliminary exploration confirms [35] that the pattern of identifying color-dual building blocks that admit composition continues at higher multiplicity, a topic that merits detailed study. Gaining all-multiplicity control would mean that, through unitarity methods, one could build relatively easy-to-construct higher loop-order scalar integrands that trivially recycle, through double copy, known gauge and gravity integrands into their higher-derivative corrections. It is noteworthy that, at four-points, compatibility with adjoint double-copy structure involving Yang-Mills building blocks ensures compatibility with supersymmetry. It is worth remarking on a striking fact that Eqs. (21) and (22) make manifest. Consider the SUSY-compatible F 4 amplitude: It was observed [27] that the kinematic factor accompanying individual trace terms, stA YM (s, t), does not satisfy the (n − 3)! field theory relations associated with adjoint color-kinematic structure. It is possible to misconstrue this result as showing that F 4 is incompatible with color-kinematics duality in some broad sense, a question we can address. We should emphasize two points, clear now from a perspective informed by many examples [32, 36-38] of representations satisfying non-adjoint color-kinematics duality. First, even as written, there is a manifest completely-symmetric color-kinematics duality at work for F 4 : both the color term, d abcd , and the kinematic (Born-Infeld) term, stA YM (s, t), are invariant under all permutations. 
This seemingly trivial duality even has teeth: there is an associated double-copy construction. Replacing the d abcd term with the permutation invariant kinematic weight stA YM (s, t) generates the gravitational R 4 amplitude consistent with maximal local supersymmetry: with the relationship to the four-graviton scattering amplitude clear from comparison to Equation (29). Second, we learn from Eqs. (21) and (22) that both A F 4 and A R 4 also manifest a non-trivial adjoint color-dual double-copy structure at four-points: The key to realizing this adjoint-type color-dual representation is to allow both color and scalar kinematics to conspire to satisfy the adjoint algebraic relations within the same adjoint-type color-dual weight, a lesson driven home top-down by abelian Z-theory [17], and constructively presented here. Not all effective particles are massless, and not all such particles are single-trace in the adjoint (cf. QCD, Einstein-Yang-Mills, and the standard model more generally), yet many admit color-dual representations [36][37][38][39][40]. It will be fascinating to see if such simple constructive building blocks are available for higher-derivative corrections to their predictions. Even in the adjoint, we have only focused here on structures involving gauge kinematics in at least one copy. Generalizations of these building blocks should be relevant to exploring higher-derivative corrections to more phenomenological effective field theories [41][42][43][44].
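As a hedged summary of the replacement just described (our schematic rendering with normalizations omitted; the paper's displayed equations are not reproduced above), the completely-symmetric double copy can be written as:

```latex
% Schematic only: color weight d^{abcd} dressed by the permutation-invariant
% Born-Infeld kinematic factor, and the kinematic replacement described above.
A^{F^4}_4 \;\propto\; d^{\,a_1 a_2 a_3 a_4}\,\big[\,s\,t\,A^{\rm YM}(s,t)\,\big],
\qquad
d^{\,a_1 a_2 a_3 a_4} \;\to\; s\,t\,A^{\rm YM}(s,t)
\;\Longrightarrow\;
A^{R^4}_4 \;\propto\; \big[\,s\,t\,A^{\rm YM}(s,t)\,\big]^2 .
```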
6,414.8
2019-10-28T00:00:00.000
[ "Physics" ]
Entropy Loading Design for the MIMO-OFDM Visible Light Communication System Using the OCT Precoding Technique
In this paper, an orthogonal circulant matrix transform (OCT) precoding technique is proposed to combine with the entropy loading in the multiple-input multiple-output and orthogonal frequency division multiplexing (MIMO-OFDM) visible light communication (VLC) system, where space-time block coding (STBC) is chosen for its robustness to channel correlation. Benefitting from the OCT precoding technique, a uniform signal-to-noise ratio (SNR) among all the subchannels can be achieved. As a result, only one SNR value is required to be fed back, and the same distribution matcher is employed during probabilistic shaping (PS), which means much lower feedback overhead and system complexity than the conventional entropy loading scheme. Experimental results show that the OCT precoding does not cause system performance loss: the achievable information rate (AIR) of the proposed system is comparable with that of the conventional system without precoding. With an available bandwidth of ∼25 MHz, the proposed scheme can realize the AIR of 50.75 Mb/s at the expense of 0.45% average forward error correction (FEC) overhead (OH).
Introduction
Recently, the emergence of the "smart home" and the rapid spread of intelligent devices have posed a great challenge to conventional network technology. The world is experiencing a profound revolution of access technology called "Anywhere, Anytime" [1]. As a result, new access technologies are urgently required, and visible light communication (VLC), which uses white light-emitting diodes (LEDs) as light sources to transmit optical signals through the air, has received particular attention in both academia and industry [2]. Compared to conventional radio frequency (RF) communication, VLC has numerous advantages such as rich spectrum, safety to human eyes, low power consumption, immunity to electromagnetic interference, and so on [3]. High-speed indoor communication is considered one of the most important applications for VLC systems. However, the modulation bandwidth of the LED is limited, ranging from a few megahertz to tens of megahertz, and the channel response is attenuated exponentially as the frequency increases [4]. To improve the data rate of the VLC system, numerous techniques have been proposed, among which orthogonal frequency division multiplexing (OFDM) [5] and multiple-input multiple-output (MIMO) [6] are considered especially effective in increasing the spectral efficiency of VLC systems. By dividing the channel into several subchannels, OFDM is able to overcome the intersymbol interference (ISI) caused by a nonideal LED frequency response under the condition of high-speed transmission. MIMO, in turn, has been proposed to increase the data rate or improve performance without requiring extra frequency resources, by equipping multiple antennas at the transmitter and receiver. Since several LEDs are usually required to provide sufficient illumination of a room, it is natural to implement the MIMO technique in VLC systems. However, as intensity modulation with direct detection (IM/DD) is applied in VLC, the value of transmitted signals is real, that is, no phase information can be provided, which leads to a high correlation of the VLC MIMO channel [7]. 
Consequently, the conventional MIMO scheme based on spatial multiplexing (SMP) cannot be applied successfully in the VLC system, as the bit error rate (BER) performance degrades sharply when the channel is highly correlated. Compared with SMP, the MIMO scheme based on transmit diversity has been proved to be robust to channel correlation, so it is more suitable for VLC systems [8]. However, as the transmit diversity scheme mainly enjoys the advantage of diversity gains, it lacks multiplexing gains. So, how to improve the data rate remains to be studied. Adaptive modulation implemented by bit allocation has been proposed to improve the data rate close to the Shannon capacity in MIMO-OFDM VLC systems [9]. Through adaptive modulation, it has been shown that the data rate can be increased without sacrificing BER performance. Furthermore, some papers have noted that the bit allocation algorithm can only assign discrete bits to each subchannel, leading to a gap from the channel capacity. Consequently, an entropy loading scheme has been proposed to narrow the gap from the channel capacity [10]. By introducing the probabilistic shaping (PS) quadrature amplitude modulation (QAM) technique [11,12], the uniformly distributed signal source is transformed to a Gaussian distribution, and continuous entropy can be realized in the entropy loading scheme. Moreover, the effect of the nonlinear distortion induced by the LED can also be reduced, for the modulated signals with higher power tend to appear with lower probabilities according to the idea of the PS technique. However, the main disadvantage of the entropy loading scheme is that a large feedback is required, and the complexity is high. Since the frequency response of the LED is attenuated exponentially as the frequency increases, the signal-to-noise ratios (SNRs) on each subchannel are different. Thus, it is necessary to feed back all the SNRs from the receiver to the transmitter. Moreover, different distribution matchers are required to generate signal sources with different probabilistic distributions for each subchannel when employing PS. To deal with this problem, a precoding scheme based on orthogonal circulant matrix transform (OCT) [13,14] is proposed to be combined with the entropy loading in the MIMO-OFDM VLC systems in this paper. Thanks to the equalization effect of OCT, SNRs are almost the same on all the subchannels, which means that only one SNR value is needed to be fed back. Meanwhile, only one distribution matcher is required, benefiting from the SNR equalization effect on all subchannels, which maximally reduces the complexity of the entropy loading scheme. Experimental results show that the achievable information rate (AIR) of the proposed system is comparable with the conventional system without precoding. With an available bandwidth of ∼25 MHz, the proposed scheme can realize the AIR of 50.75 Mb/s at the expense of 0.45% average forward error correction (FEC) overhead (OH).
Operation Principle
Under the condition of limited modulation bandwidth, the entropy loading transmission scheme based on PS can effectively narrow the gap with the Shannon limit. In the entropy loading scheme, channel capacity is approached by a Gaussian source where constellation points of M-QAM are assigned different probabilities by a probabilistic distribution matcher. Obviously, SNR has to be known at the transmitter during the calculation of the channel capacity. 
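As a generic illustration of how such a distribution matcher shapes the source (an assumption-laden sketch, not the paper's matcher: the Maxwell-Boltzmann family and the parameter value below are illustrative choices), lower-energy constellation points can be assigned higher probabilities as follows:

```python
# Illustrative sketch: Maxwell-Boltzmann shaping of a square QAM constellation.
# This is a generic PS example, not the exact distribution matcher of the paper.
import numpy as np

def mb_shaped_qam(m_order=16, lam=0.05):
    """Return constellation points, their probabilities, and the source entropy (bits/symbol)."""
    m_side = int(np.sqrt(m_order))
    pam = np.arange(-(m_side - 1), m_side, 2)            # e.g. [-3, -1, 1, 3] for 16-QAM
    points = (pam[:, None] + 1j * pam[None, :]).ravel()  # square QAM grid
    probs = np.exp(-lam * np.abs(points) ** 2)           # lower energy -> higher probability
    probs /= probs.sum()
    entropy = -np.sum(probs * np.log2(probs))            # information rate per symbol
    return points, probs, entropy

points, probs, H = mb_shaped_qam(16, lam=0.08)
print(f"shaped entropy: {H:.2f} bits/symbol (uniform 16-QAM would be 4.00)")
```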
However, due to the frequency-selective characteristics of the LED, the SNRs of different subchannels are also different, which indicates that a large amount of information has to be fed back. Moreover, different kinds of distribution matchers are required to implement PS on each subchannel. In order to reduce the feedback and the system complexity, we proposed an OCT precoding scheme to combine with the entropy loading. OCT precoding was first proposed in [13,14], where pre-equalization is realized without feeding back channel state information. Therefore, we introduce OCT precoding in the entropy loading scheme to equalize the SNRs uniformly over all subchannels. In this way, the feedback overhead and the system complexity can be greatly reduced, that is, only one SNR is required to be fed back, and data sources on different subchannels can use the same distribution matcher. Without loss of generality, consider a MIMO-OFDM VLC system configured with two LEDs as transmitters (TX) and two photoelectric detectors (PDs) as receivers (RX). In our scheme, OCT precoding is jointly employed over all subchannels where the MIMO channel can be made equivalent to two decorrelated channels by using space-time block coding (STBC). According to Hong et al. [14], the OCT matrix, which is constructed from the Zadoff-Chu (ZC) sequence [15], can be expressed by where K denotes the subchannel number for each decorrelated channel and f l (1 ≤ l ≤ 2K) is the corresponding element of the ZC sequence with a length of 2K. Then, the precoded signals can be given by where X i = [X i (1), X i (2), . . . , X i (K)] T denotes the vector of the signals from the i th transmitted data stream (i = 1, 2). As shown, signals in the frequency domain are multiplied by an orthogonal circulant matrix through OCT precoding; then, the information on each subchannel is spread across all subchannels to achieve frequency diversity. Generalized mutual information (GMI), normalized GMI (NGMI), and FEC OH are chosen as the main figures of merit (FOM) to evaluate the performance of the system [16]. GMI quantifies the maximum number of information bits per transmitted symbol after ideal decoding, and the GMI can be obtained by log-likelihood ratios (LLRs) based on Monte Carlo simulations [17]. Assume that the discrete channel input X ′ = {X 1 ′ , X 2 ′ , . . . , X N ′ } is independent and identically distributed, and Y ′ represents the corresponding channel output, in which X i ′ ∈ χ, χ = {x 1 , x 2 , . . . , x M }, and the symbol X ′ consists of m bit levels: where P X′ (x) denotes the corresponding distribution probability mass function, q Y′|X′ (y k | x) is given by the expression above, b k,i ∈ {0, 1} is the i th bit of the k th transmit symbol, and χ b k,i is the set of constellation symbols whose i th bit value is b k,i . After the GMI is estimated, the NGMI of the PS-M-QAM source [11], which is used to evaluate the BER performance after the forward error correction (FEC), can be obtained by where H(P X′ ) is the entropy of the constellation. Once the NGMI is calculated, the FEC OH can be determined, which is a threshold for error-free post-FEC results [18,19]. After removing the OH, the AIR of the individual subcarrier can be calculated by Equation (5). And the average of them is the total AIR of the whole system: where B denotes the baud rate, and the OH is given by OH = (1 − NGMI)/NGMI.
System Configuration and Results
In this section, an experimental demonstration is set up to evaluate the performance of the proposed system. 
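Since the explicit OCT matrix expression is not reproduced above, the following hedged sketch assumes one common construction, a circulant matrix generated by a Zadoff-Chu sequence and normalized to be unitary, and checks the two properties the scheme relies on (orthogonality and perfect recovery). The subchannel count and root index are illustrative, and the exact indexing in [13,14] may differ:

```python
# Hedged sketch of OCT precoding: a circulant matrix whose first column is a
# Zadoff-Chu sequence is (after 1/sqrt(K) scaling) unitary, because the DFT of a
# ZC sequence has constant magnitude.  Exact construction in the cited works may differ.
import numpy as np
from scipy.linalg import circulant

def zadoff_chu(K, root=1):
    n = np.arange(K)
    if K % 2 == 0:
        return np.exp(-1j * np.pi * root * n**2 / K)
    return np.exp(-1j * np.pi * root * n * (n + 1) / K)

def oct_matrix(K, root=1):
    return circulant(zadoff_chu(K, root)) / np.sqrt(K)   # K x K precoding matrix

K = 64                                                   # illustrative number of data subchannels
F = oct_matrix(K)
print("unitary:", np.allclose(F @ F.conj().T, np.eye(K)))        # expect True

X = (np.random.randint(0, 2, K) * 2 - 1) + 1j * (np.random.randint(0, 2, K) * 2 - 1)
X_precoded = F @ X                                       # spread each symbol over all subchannels
X_recovered = F.conj().T @ X_precoded                    # receiver applies the Hermitian transpose
print("recovered:", np.allclose(X, X_recovered))
```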
In Figure 1, the block diagram and the experimental setup of the proposed VLC system are illustrated. The distance between the transmitter side and the receiver side is 0.8 m. At the transmitter, the source distribution is determined by the channel capacity, which is calculated according to the feedback information of SNR. Then, the source with the determined distribution is generated by the distribution matcher. After QAM modulation and serial-to-parallel (S/P) conversion, the OCT precoding is jointly implemented over all subchannels. Finally, STBC and OFDM are applied to generate the transmitted signals. Note that signals transmitted by the VLC system must be real-valued, so only half of the subcarriers are used to transmit signals during OFDM modulation, and the other half is used to transmit complex conjugate signals of the above signals. Besides, among that half of the subcarriers, the six low-frequency subcarriers (from the 1st to the 6th) suffer from severely faded SNR, which may degrade the system performance. As a result, zero-padding is used on them. In the experiment, an arbitrary function generator (AFG, Tektronix AFG3252C) is used to generate the transmitted signals at 100 MSa/s. Meanwhile, a direct current (DC) offset supplied by the AFG is set to ensure that the electrical OFDM signals are positive. Then, the mixed signals are transmitted in the form of optical power by two commercially available LEDs (Cree XLamp XP-E) radiating red light, whose center wavelength is 620 nm and maximum power is 1 W. Because the LED is a point light source, a reflection cup with a 60° angle is used to concentrate the light. At the receiver, PDs (Hamamatsu C12702-11, 0.42 A/W responsivity at 620 nm) with 1 mm 2 active area and about 100 MHz bandwidth are used. Then, optical signals entering the PDs are converted into electrical signals and amplified by an electrical amplifier (EA) circuit. Finally, electrical signals are collected by a real-time oscilloscope (OSC, Tektronix MDO4104C) with a sampling rate of 100 MSa/s. The system parameters of our experiments are listed in Table 1. The signal processing at the receiver is the inverse process of the transmitter. After frame synchronization, the signals are first converted to the frequency domain by OFDM demodulation. Then, the channel and the noise variance are estimated with the help of the preamble for the sake of signal decoding and SNR estimation. Since the SNR values of different subcarriers and different receivers are slightly different due to random noise, the SNR value used for feedback is calculated by averaging the SNR values over different subcarriers and different receivers, that is, only one SNR value is required to be fed back from the receiver to the transmitter. Here, we assume that the transmitter can obtain the exact SNR information. Finally, through STBC decoding and OCT decoding, the binary bit sequence is recovered after M-QAM demapping. To make the comparison fair, the same QAM order is used for the conventional entropy loading scheme and the proposed OCT-precoding-based entropy loading scheme. In Figure 2, the SNR results of the conventional and proposed entropy loading schemes for two receivers are given. As shown, there are slight differences among the SNR values of different receivers because of random noise. As a result, the SNR curves of the two receivers are similar, with the same trend. 
As displayed, a flat SNR curve across the spectrum can be obtained in the proposed scheme owing to the application of the OCT precoding. For the conventional scheme, by contrast, the SNR fluctuations over subcarriers exceed 10 dB because of the attenuation of the channel frequency response. In Figure 3, the experimental results are illustrated when 32-QAM is employed in the system. In the experiment, the BERs of the conventional scheme and the proposed scheme are 8.04 × 10 −4 and 7.15 × 10 −4 , respectively, which are both below the 7% pre-forward error correction (pre-FEC) threshold of 3.8 × 10 −3 . As shown in Figure 3(a), the results of the GMI curves agree with the estimated SNR curves: the GMI of the conventional scheme decreases sharply because of the attenuation of the channel frequency response, while the GMI of the proposed scheme is uniform across all subcarriers, thanks to the OCT precoding. In Figure 3(b), the results of NGMI are given. It can be seen that the NGMI of the conventional scheme is close to 1 on most of the subcarriers. However, the severely faded NGMI of several subcarriers may degrade the total performance. The NGMI curve of the proposed scheme with OCT precoding, by contrast, is flat, just like its GMI curve in Figure 3. Because of the PS technique, the constellation points with lower energy are assigned higher probabilities. In this way, the nonlinear distortion induced by LEDs can also be reduced. Furthermore, the system performance is studied when 64-QAM is used, and the experimental results are shown in Figures 4(a)-4(d). The GMI and NGMI curves are similar to the case of 32-QAM. The BERs of the conventional scheme and the proposed scheme are 0.0021 and 5.12 × 10 −4 , both of which are a little higher than the results of the 32-QAM case. The results indicate that the nonlinearity of the LED would impact the BER performance as the modulation order grows. In Table 2, the results of the AIR performance comparison are listed. When the modulation order is equal to 32, the total AIRs of the conventional and proposed systems are 51.88 Mb/s and 50.75 Mb/s, respectively, and the average FEC OHs are 0.42% and 0.45%. The results indicate that both systems have nearly the same transmission rate and OH, which proves that the OCT precoding does not cause performance loss. When 64-QAM is applied to the system, the total AIRs of the conventional and proposed systems are 51.25 Mb/s and 50.65 Mb/s, with the average FEC OHs equal to 1.15% and 0.48%, respectively. Compared with the 32-QAM-based system, the AIRs decrease slightly because more nonlinear distortion tends to occur in the high-order modulation system. Consequently, more OH is required, especially in the conventional system without precoding.
Conclusion
In this paper, the OCT precoding scheme is proposed to combine with the entropy loading in MIMO-OFDM VLC systems to reduce the system feedback and complexity significantly without performance loss. Through OCT precoding, SNRs among different subchannels are equalized uniformly owing to the advantage of frequency diversity. As a result, only one SNR value is required to be fed back to the transmitter, and only one distribution matcher is needed in the process of PS, leading to a much lower feedback overhead and system complexity. Finally, an experimental demonstration is set up to evaluate the performance of the proposed system. The experimental results confirm that the AIR of the proposed system is similar to the value of the conventional system without precoding. 
With an available bandwidth of ∼25 MHz, the proposed scheme can experimentally achieve the AIR of 50.75 Mb/s at the expense of 0.45% average FEC OH.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
4,009.6
2020-09-19T00:00:00.000
[ "Engineering", "Computer Science" ]
Immunophenotyping of hemocytes from infected Galleria mellonella larvae as an innovative tool for immune profiling, infection studies and drug screening
In recent years, there has been considerable and increasing interest in the use of the greater wax moth Galleria mellonella as an animal model. In vivo pharmacological tests concerning the efficacy and the toxicity of novel compounds are typically performed in mammalian models. However, the use of the latter is costly, laborious and requires ethical approval. In this context, G. mellonella larvae can be considered a valid option due to their greater ease of use and the absence of ethical rules. Furthermore, it has been demonstrated that the immune system of these invertebrates has similarities with that of mammals, thus guaranteeing the reliability of this in vivo model, mainly in the microbiological field. To better develop the full potential of this model, we present a novel approach to characterize the hemocyte population from G. mellonella larvae and to highlight the immunomodulation upon infection and treatments. Our approach is based on the detection, in hemocytes isolated from G. mellonella hemolymph, of cell membrane markers typically expressed by human immune cells upon inflammation and infection, for instance, CD14, CD44, CD80, CD163 and CD200. This method highlights the analogies between G. mellonella larvae and humans. Furthermore, we provide an innovative tool to perform pre-clinical evaluations of the efficacy of antimicrobial compounds in vivo to further proceed with clinical trials and support drug discovery campaigns. Several approaches based on the use of G. mellonella larvae are currently available and could be summarized into two main areas of study: the kinetics of survival after the testing of an infection-treatment and the host-pathogen interaction. Available techniques to evaluate immunomodulators or characterize hemocyte-pathogen interaction and the different cell types vary from the hemocytometer count, colorimetric assays, RT-PCR, 2D electrophoresis, fluorescent microscopy, reactive oxygen species or cytokine measurements by ELISA, detection of Annexin V or other apoptosis assays by flow cytometry and many others 6 . The implementation of molecular biology is necessary to obtain more detailed information about the immune profile of hemocytes to disclose all the above-mentioned aspects of the study of G. mellonella 7 . To date, a protocol for an immunophenotypic analysis of the hemocyte population is not available. This study has been designed to develop an innovative approach for the characterization of G. mellonella-derived hemocytes and their immune activation. With this aim, cells from the hemolymph of Staphylococcus aureus-infected larvae with or without Vancomycin were isolated and analyzed by flow cytometry. Cell membrane markers typically expressed on monocytes/macrophages upon inflammation/infection (CD14, CD44, CD200), M1-polarization (CD80) and M2-polarization (CD163) were analyzed in the isolated hemocyte population. The panel of CD markers set up for the analysis of hemocytes was used for the characterization of human monocytes and macrophages as a comparison.
Infection of G. mellonella larvae with S. aureus
At 24 h post infection, the highest inoculum dose tested results in a survival rate of 23% (Fig. 1d); therefore, 10 6 CFU/larva was selected for our experiments to mimic the worst scenario of infection. Vancomycin is effective on larval survival in a dose-dependent manner (Fig. 
1e). Given these results, the 50 mg/kg dose was selected for further experiments to obtain the best conditions of recovery.
Cell count
The number of undifferentiated monocytes is not affected by treatments, except for cells exposed to LPS after 24 h, which are weakly but significantly increased compared to cells at T0 (Fig. 2a). Likewise, no fluctuations are registered in the macrophage population other than between untreated and LPS-stimulated cells at T0. In parallel, the number of hemocytes was measured in the established experimental conditions (Fig. 3a). Although the number of cells is not influenced by treatments at T0 and after 3 h, exposures after 24 h are remarkably effective. The number of hemocytes isolated from S. aureus-infected larvae is significantly lower after 24 h compared to all the experimental conditions at T0 and 3 h, whereas the number of cells significantly increases in samples isolated from Vancomycin-exposed larvae.
Expression of CD markers in human undifferentiated monocytes and macrophages
About 99% of both monocytes and macrophages express CD14 and CD44 in all the experimental conditions (Fig. 2b). CD80, a marker related to macrophage M1 polarization, is instead differentially expressed in both cell populations and between treatments. While it is almost not expressed in the untreated monocyte population, a time-dependent increase in the percentage of CD80-positive cells is registered in LPS-stimulated cells (6.9% at 3 h and 32.9% at 24 h) (Fig. 2c). In parallel, the increase in CD80 positivity is time-dependent for untreated macrophages, and this phenomenon is even more amplified for LPS-stimulated macrophages (15.9% at T0), with a peak after 24 h of exposure (70.7%). Monocytes do not, or only weakly, express CD163, a marker related to M2 macrophages (Fig. 2d). Contrariwise, CD163 is upregulated in the macrophage population and its expression augments up to 24 h. In detail, a remarkable increase in the positive population is registered after 3 h (about 55%) compared to T0 (about 20%), independently of LPS. After 24 h, 72.4% of LPS-stimulated macrophages express CD163, while this percentage is comparable to the one registered after 3 h in the untreated control. Again, CD200 is only weakly expressed in monocytes at all the experimental times, remaining around 5.5% (Fig. 2e). On the other hand, percentages of CD200-positive macrophages increase over the time of the experimental procedure, to a greater extent in LPS-stimulated samples. More specifically, the positivity varies from 33.3% (LPS, T0) to almost 99% after 24 h.
Expression of CD markers in G. mellonella-derived hemocytes
Hemocytes resuspended in FACS buffer present no signs of cell death and preserve their morphology compared to cells not resuspended in the buffer (Fig. 4a-f). Differences between the two experimental conditions are not remarkable at 30 min, 3 h and 24 h after isolation from larvae. Notably, differences in cell morphology within the whole cell population isolated from larvae are appreciable (Figure 4g, h). In parallel, the expression of CD14, CD44 and CD200 was measured in control groups (Fig. 4i), namely (I) untreated larvae (UC); (II) PBS-injected larvae (PBS); (III) S. aureus 10 6 CFU/larva + PBS (30 min apart); (IV) PBS + V 50 mg/kg (30 min apart); (V) PBS + PBS (30 min apart). Levels of the same CD marker between experimental conditions of the control groups at the same exposure time appear very similar to the ones registered when larvae are treated with S. aureus and S. 
aureus + V (Fig. 3). It is thus plausible to assume that a double injection can alter the immune system of hemocytes over the time of the experiment, but not as significantly as the further treatments. Of note, cells within the same isolated population display different cell morphologies, making it plausible to assume that different types of hemocytes are present, as reported elsewhere 5 . The immunophenotypic profile of hemocytes isolated from LPS-stimulated G. mellonella larvae (LPS-hem) is shown in Fig. 3b. CD14 positivity is enhanced after 3 and 24 h, whereas CD44 is only weakly expressed. Next, CD80 and CD163 expression is hardly detectable. Notably, these hemocytes display a significant positivity for the CD200 marker, independently of the time of exposure. The positivity for CD14 in the hemocytes isolated from non-treated larvae (UC-hem) increases over the time of the experimental procedure, starting from 29.5% at T0 up to 53.8% after 24 h (Fig. 3c). In parallel, hemocytes derived from S. aureus-infected larvae (Sa-hem) disclose a similar pattern of positivity, although percentages are significantly higher than in UC. As for hemocytes isolated from larvae infected with S. aureus and subsequently treated with Vancomycin (SaV-hem), the CD14 positivity is found to decrease after 3 h compared to Sa-hem (35.1% and 53.2%, respectively). Percentages are comparable after 24 h in all the experimental conditions. About 10% of the hemocyte cell fraction is positive for CD44 at T0 (Fig. 3c). A dramatic increase is registered in the Sa-hem at 3 h (33.7%) and even more at 24 h (50.8%) compared to T0. In parallel, the presence of Vancomycin downregulates CD44 expression (14.7% at 3 h and 41.4% at 24 h). Next, about 20% of the whole UC-hem population is registered positive for CD80 in all the experimental conditions (Fig. 3d and 4g). The highest percentage is registered when G. mellonella larvae are infected by S. aureus after 3 h (28.6%). Values are found comparable after 24 h of exposure. A similar trend is assessed for CD163 cell membrane expression at 3 h (Fig. 3e, g). In parallel, the CD marker is halved after 24 h in the Sa-hem population (16.4%) compared to UC-hem (30.1%) and weakly expressed in the presence of V (25.6%). Finally, CD200-positive cells are considerably high in the untreated population and percentages increase over the time of the experiment (42.6% at T0, 55.6% at 3 h and 65.6% after 24 h) (Fig. 3f, g). On the other hand, CD200 percentages in Sa-hem disclose a less proportional time-dependent trend, being significantly increased after 3 h (48.5%) with respect to T0 (28.6%), but equally decreased after 24 h (24.6%). CD200 positivity percentages in this population remain significantly lower compared to UC at all the experimental times. In the presence of V, there is a similar trend of positivity at T0 and 3 h, but CD200-positive cells are higher after 24 h (44.9%).
Discussion
G. mellonella larvae are an invertebrate animal model of wide interest for infectious disease research and, more recently, toxicology. As invertebrates, they offer natural advantages over mice and rats for ethical, handling, and cost reasons. However, Galleria has serious limitations. It lacks many of the complex organ systems found in mammals, and, like all other invertebrates, larvae do not have an adaptive immune system, although the innate system has proven to have similarities to that of humans 8 . Although G. 
mellonella does not possess antigen-specific memory-based adaptive immunity, there is emerging evidence that larval immune responses are greatly enhanced in response to reinfections. Moreover, this immunological "memory" is epigenetically inherited by subsequent generations of insects 9 . In addition, recent studies have reported that immune priming is more similar to the phenomenon of "trained immunity" of vertebrate cells than to adaptive immunity per se 10 . There is great interest in developing immunological methods that require fewer mammalian or non-mammalian donors. Thus, the use of insects as research models urgently requires the development of methods for working with hemocytes. Cells of the human immune system are called leukocytes (or white blood cells). The leukocytes of innate immunity are classified as granulocytes (neutrophils, basophils, and eosinophils), macrophages, mast cells, dendritic cells (DCs), and natural killer (NK) cells. Each subpopulation has its own specific immunophenotypic profile; immunophenotyping, the process used to identify cells based on the types of antigens or CD markers on their surface 11 , is typically carried out by flow cytometry. In parallel, six types of hemocytes have been identified in the hemolymph of G. mellonella: plasmatocytes and granular cells, which are mainly involved in phagocytosis; pro-hemocytes, which may be stem cells able to differentiate into other hemocyte types; coagulocytes, participating in hemolymph coagulation; spherulocytes, which mediate the secretion of cuticular components; and oenocytoids, involved in melanization 3,5,12 . Many studies of insect cells employ flow cytometry. A flow cytometric analysis has been used to characterize silkworm hemocytes from Bombyx mori using a fluorescent lectin staining 13 . Furthermore, a novel protocol has been established by Wrońska and co-workers 2 for intracellular cytokine detection based on flow cytometry in hemocytes from G. mellonella larvae. To the best of our knowledge, a study presenting a flow cytometric approach to analyze the immunophenotype of hemocytes in terms of expression of antigens on their membrane has not been reported, due to the lack of specific antibodies designed for this purpose. Considering the reported shared characteristics between the human innate immune system and that of G. mellonella, and that granular cells possessing macrophage-like functionalities have been identified in the hemolymph of G. mellonella 5 , an infection model of G. mellonella has been established with the aim of performing immunophenotypic analyses in vitro using anti-human antibodies towards macrophage CD markers modulated upon inflammation/infection. S. aureus is an opportunistic pathogen responsible for nosocomial infections and a plethora of diseases ranging from skin infections to pneumonia, osteomyelitis and sepsis 14 . The treatment of such conditions is difficult due to the development of antimicrobial resistance against drugs commonly used in therapies. In particular, Methicillin-Resistant S. aureus (MRSA) is associated with severe infections with increased mortality and morbidity 15,16 . The infection with S. aureus MRSA thus mimics human sepsis in vivo; it was subsequently treated with Vancomycin, to which the strain is susceptible according to the MIC value. This study aims at evaluating the modulation of the G. 
mellonella immune system over time and at comparing the results with the ones obtained in vitro by stimulating human undifferentiated monocytes and macrophages with LPS. For comparison, larvae were also stimulated with LPS, to mimic infection by Gram-negative bacteria. The data obtained confirmed a similarity between the in vitro and in vivo models, suggesting that the characterization of the immune system of G. mellonella could represent a suitable pre-clinical model to validate the anti-inflammatory and immunomodulatory properties of natural and synthetic compounds. CD14 is a lipopolysaccharide-binding protein, functioning as an endotoxin receptor. It is found strongly expressed in monocytes and most tissue macrophages, but monoblasts and promonocytes are weakly positive or negative for this marker. Myeloblasts and other granulocytic precursors do not express CD14, but neutrophils and a small proportion of B lymphocytes may weakly express it 17 . In parallel, adhesive interactions between CD44 and hyaluronan (HA) have been implicated in the regulation of immune cell trafficking within various tissues. More specifically, it has been found that CD44 is involved in the leukocyte recruitment cascade upon inflammation and infection (rolling, firm adhesion, trans-endothelial migration and chemotaxis) 18 . In our experimental model, undifferentiated monocytes and macrophages strongly express CD14 and CD44, independently of the presence of LPS, as expected. On the other hand, CD14 and CD44 are differentially expressed in the hemocyte population depending on the experimental conditions, disclosing a more heterogeneous distribution of cell populations within the whole cell fraction analyzed. In detail, the increase in CD14 and CD44 positivity in the infected hemocyte population at 3 h and 24 h is paralleled by a decrease when larvae are exposed to Vancomycin. The data disclose that CD14 and CD44 are weakly expressed by hemocytes under basal conditions and thus their expression is inducible and modulated by infection. CD14 + cells are considered to be mostly macrophages and monocytes, although some studies indicate that neutrophils express CD14 at low levels 19 . Additionally, evidence suggests that CD44 is a physiological human neutrophil E-selectin ligand 20 . This observation lays the groundwork for further in-depth analysis to decipher the modulation of CD14 and CD44 on the hemocyte membrane, suggesting the investigation of markers more related to the neutrophil population. According to their inflammatory status, macrophages are classified as classically activated (pro-inflammatory, M1), non-activated (M0) and alternatively activated (anti-inflammatory or pro-resolving, M2) cells, a classification also associated with distinct genetic profiles and expression of specific surface markers. CD163 is a macrophage-specific scavenger receptor for haptoglobin-hemoglobin complexes found on the cell membranes of M2 macrophages. Its expression is strongly induced by the anti-inflammatory cytokine IL-10, making CD163 a marker of the occurrence of the anti-inflammatory process. On the contrary, CD80 + cells are classically M1 macrophages 21 . As expected, CD80 and CD163 are expressed on human macrophages but not in undifferentiated monocytes, and their expression is amplified by the presence of LPS. In the hemocyte population, CD80 cell positivity is weak and seems only weakly influenced by the presence of the infection induced by S. 
aureus. In parallel, there is an increase in the CD163 cell positivity in Sa-hem after 3 h, but levels are comparable among the experimental conditions after 24 h. This observation reinforces the previous one: a macrophage-like population is present among the hemocytes but might display a different marker onset compared to the human one. Finally, expression of CD200 was analyzed. CD200 is a transmembrane protein related to the B7 family of co-stimulatory receptors involved in T-cell signaling and likely plays a role in physiologic immune tolerance. It is normally expressed on lymphoid and neuronal tissues, and its receptor, CD200R, is found on antigen-presenting cells and T-cells. Additionally, classical macrophage activation is inhibited by the CD200 receptor (CD200R), and the CD200/CD200R immune-checkpoint pathway has been widely demonstrated to maintain immune homeostasis during infection by preventing excessive activation of macrophages 22 . Our data confirm that CD200 mediates the inflammatory response in the macrophage population, as the percentage of positive cells increases in the presence of LPS. In the hemocyte population, CD200 increases in the untreated population over the time of the experiment, decreases in infected cells, and then increases again after 24 h with Vancomycin (Fig. 3f). To survive in a host, many bacterial and parasitic pathogens co-opt the CD200-CD200R axis by modulating the expression of either CD200 or CD200R1, which in turn attenuates innate immunity. For example, Leishmania amazonensis induces the expression of CD200 both at mRNA and protein levels in bone marrow macrophages, which in turn inhibit neighboring macrophages expressing CD200R1, thus abrogating nitric oxide (NO) production during the infection 23 . It has been reported that CD200 significantly suppresses the S. aureus-induced production of NO and pro-inflammatory cytokines in mouse macrophages 24 . The decreased expression of CD200 upon S. aureus infection therefore warrants a more in-depth analysis of this pathway in the hemocyte population. Our novel approach confirms that invertebrates and vertebrates share evolutionarily conserved components of innate immune responses 3,25 , as hemocytes from G. mellonella larvae react with anti-human antibodies commonly used for immunophenotyping in vitro. In parallel, our analysis discloses that the hemocyte population is highly heterogeneous, with the immunophenotypic profile of hemocytes being significantly different from that of a homogeneous monocytic/macrophagic cell line. Therefore, a wider panel of CD markers related to the whole leukocyte population might be established to discriminate the various population subtypes and to better understand the molecular mechanisms underlying hemocyte activation. Finally, it has been demonstrated that hemocytes from G. mellonella are highly responsive to infection/inflammation insults and that their immunophenotype is modulated by drugs, similarly to human blood monocytes. Despite the limitations of using anti-human antibodies to discriminate hemocytes from an invertebrate, profiling the immunophenotype of G. mellonella larvae in vitro could therefore be a suitable tool for the screening of vaccines and of new compounds and substances acting as immunomodulators or antibiotics.
G. mellonella larvae
G. 
mellonella larvae were obtained from the laboratory colony available at the Department of Agricultural and Food Science (DISTAL) of the University of Bologna (Italy) and stored in the dark at 37 °C until use. At DISTAL, the colony was maintained at 30 ± 1 °C, 65 ± 5% RH, 0:24 L:D photoperiod, according to the methods described by Dindo and Francati 27 . The larvae were kept in plastic boxes (24 × 8 × 8 cm) and fed on the artificial diet developed by Campadelli 28 , composed of skimmed milk powder, white wheat flour, whole wheat flour, maize flour, brewer's yeast, beeswax, wildflower honey and glycerin. To obtain eggs, about 100 cocooned mature larvae approaching pupation were placed in plastic boxes (3.5 L volume) with a 6-cm diameter hole in their lids. Holes were covered with filter paper discs, which were fixed to the lids with adhesive tape. Adults emerged 5-6 days after pupation, mated, and females laid eggs on the paper disc. Eggs were collected 2-3 times a week and placed in new boxes with diet, which was supplied every 2-3 days until the end of larval development, which proceeds through 6-7 instars and lasts approximately 30-35 days. Adult moths do not feed 29 . Sixth-instar larvae (about 2 cm long) were used for the experiments. 24 h before the experiments, larvae were weighed, selected and kept in a separate box without diet.
Infection of G. mellonella larvae
S. aureus ATCC 43300 was used in this study. The strain was cultured on Mueller Hinton Agar plates (MHA, Oxoid) at 37 °C in aerobic conditions. Bacteria were transferred into 8 mL of Mueller Hinton II Broth (MH2B; Sigma-Aldrich) and incubated at 37 °C, 125 rpm, in aerobiosis. After 16 h of incubation, bacteria were harvested by centrifugation at 10,000 rpm for 5 min at 4 °C. The supernatant was discarded, and the cellular pellet was washed in Phosphate Buffered Saline (PBS; Sigma-Aldrich), followed by another step of centrifugation as before. Bacterial cells were re-suspended in PBS and the optical density was measured at 600 nm (OD 600 ) to obtain the proper bacterial suspension. Larvae weighing 200-250 mg were selected for the experiments. Each administration required the injection of a 10 μL volume in the third left pro-leg of the larva (Fig. 2). When two injections were required, the second one was performed on the third right pro-leg of the larva. To determine the optimal infection dose, groups of G. mellonella larvae (n = 10 per group) were injected with different suspensions of S. aureus (10 6 , 10 5 , or 10 4 CFU/larva) and incubated in Petri dishes at 37 °C for 4 days to score mortality. Control groups included: (I) untreated larvae; (II) PBS-injected larvae. After establishing the proper inoculum dose suitable for our experiments (10 6 CFU/larva), the in vivo efficacy of different doses of Vancomycin (V) was assessed. At 30 min post-infection, larvae were randomized to receive 1 mg/kg, 10 mg/kg or 50 mg/kg 30 of the antibiotic and then incubated at 37 °C in a Petri dish to score mortality. The Minimum Inhibitory Concentration (MIC) of Vancomycin for S. aureus ATCC 43300 corresponded to 1 μg/mL and was previously determined in vitro via the broth microdilution method 31,32 . After determining the proper bacterial inoculum (10 6 CFU/larva) and the Vancomycin dose (50 mg/kg), G. mellonella larvae were divided into two treatment groups, i.e. (I) S. aureus 10 6 CFU/larva and (II) S. aureus 10 6 CFU/larva + V 50 mg/kg. An additional group was injected with LPS from E. 
coli: (III) LPS 1 µg/larva. The control groups included: (I) untreated larvae (UC); (II) PBS-injected larvae; (III) S. aureus 10 6 CFU/larva + PBS (30 min apart); (IV) PBS + V 50 mg/kg (30 min apart); (V) PBS + PBS (30 min apart).
Hemolymph extraction
Hemolymph extraction was performed at different time points: (I) T0, immediately after injection; (II) 3 h post injection; (III) 24 h post injection. Each larva was anesthetized on ice for 1-2 min before gently cutting one of the last abdominal segments with a scalpel (Figure 1a, b): the hemolymph was allowed to drain out and collected in a sterile tube (Figure 1c). After collecting the hemolymph from two larvae belonging to the same treatment group, 20 μL of the hemolymph pool were harvested with a micropipette, transferred into a sterile tube and mixed with 100 μL of an anticoagulant solution (93 mM NaCl, 100 mM glucose, 30 mM trisodium citrate, 26 mM citric acid, 10 mM Na 2 EDTA, and 0.1 mM phenylthiourea, pH 4.6), prepared as reported elsewhere 33 .
Cell count and immunophenotyping of undifferentiated monocytes, in vitro differentiated macrophages and hemocytes isolated from G. mellonella
After the established exposure times (T0, 3 and 24 h), the number of undifferentiated monocytes, macrophages and hemocytes from G. mellonella was assessed by flow cytometry (CytoFLEX, Beckman Coulter, CA, USA). As for FACS analyses, before running samples, cells were stained with propidium iodide (PI). Briefly, PI 10 µg/mL (stock solution = 1 mg/mL) was added to each sample for 10 minutes. Next, cells were run on the FACS and gated by their morphological parameters (Side Scatter/Forward Scatter, SSC/FSC), excluding the necrotic population (Fig. 5). Next, a defined flow rate (medium) and acquisition time (1 min) were set. Data were expressed as the number of cells in the morphological gate of viable cells scattering the laser emission within 1 minute, analyzed through the CytExpert Software 5.0 (Beckman Coulter, CA, USA). Next, the expression of surface markers (CDs) was analyzed using flow cytometry. After the exposure times, cells were harvested, collected by centrifugation in the cold, and washed once with FACS buffer. Cells were incubated with fluorochrome-conjugated antibodies (1:50 dilutions) in 50 μL of FACS buffer for 15 min in the dark. Cells were stained separately in each single screening tube with a panel of anti-human mouse monoclonal antibodies: CD14-FITC, CD44-FITC, CD80-PE, CD163-PE and CD200-PE (all purchased from BD Biosciences, MA, USA). Then, the excess of antibodies was removed by adding fresh FACS buffer and by centrifugation. Before running 20,000 events (for monocytes and macrophages) and 10,000 events (for hemocytes) in a Beckman Coulter CytoFLEX flow cytometer (Brea, CA, USA), cells were incubated with propidium iodide (PI) to exclude necrotic PI-positive cells from the analysis (Figure 5). Relative fluorescence emissions of cells gated by forward and side scatter properties (FSC/SSC) were analyzed using the CytExpert Software (Beckman Coulter), and results were expressed as the percentage of positive cells for each CD marker. Individual values obtained from independent experiments (n = 6) were summarized as means and standard deviations. Figure 2. 
Cell count and immunophenotypic profile of human undifferentiated monocytes and macrophages at T0 and after 3 and 24 h from the LPS stimulation. UC = untreated cells; LPS = cells stimulated with 0.5 µg/mL of LPS. (a) The dot graph displays the cell count expressed as cells/minute (*p < 0.01 between samples marked by lines). (b) Bar graphs show percentages of cells stained positive for anti-human mouse monoclonal CD14-FITC and CD44-FITC. (c, d, e) Bar graphs show percentages of cells stained positive for anti-human mouse monoclonal CD80-PE, CD163-PE and CD200-PE. *p < 0.01 and ***p < 0.0001 between samples marked by lines; °p < 0.01 and °°°p < 0.0001 between samples in the same experimental condition at different exposure times (T0 vs. 3 h and 3 h vs. 24 h). (f) Peaks of fluorescence emission (PE = phycoerythrin) generated by flow cytometry, related to CD80, CD163 and CD200 expression in macrophages after 24 h of treatment (x-axis: cell count; y-axis: PE emission). Figure 5. Gating strategy for the immunophenotype analysis performed by flow cytometry. (a) Side scatter/Forward scatter (SSC/FSC) dot plots represent morphological parameters of unstained monocytes, macrophages, and G. mellonella-derived hemocytes. (b) Each cell type was incubated with anti-human clusters of designation (CDs)-fluorochrome conjugates and propidium iodide in parallel to assess the necrotic cell population. Cells stained positive were gated and labeled as necrotic. (c) The cell population excluded from the logical gate "necrotic" (in blue) represents the viable population used for further analyses. (d) The SSC/FITC (fluorescein) or PE (phycoerythrin) dot plots represent cells incubated with antibody isotypes (FITC- and PE-conjugated) used to set the fluorochrome threshold (negative control). Right-shifted cells in the FITC and PE gates were considered positive for the marker analyzed. Data were expressed as the percentage of positive cells. (e) Histograms show peaks of fluorescence emission in the FITC and PE channels. Right-shifted peaks are directly proportional to the number of positive cells for the marker analyzed and are expressed as mean fluorescence intensity (MFI).
6,206.2
2024-01-08T00:00:00.000
[ "Biology", "Medicine" ]
Kernel-density estimation and approximate Bayesian computation for flexible epidemiological model fitting in Python Fitting complex models to epidemiological data is a challenging problem: methodologies can be inaccessible to all but specialists, there may be challenges in adequately describing uncertainty in model fitting, the complex models may take a long time to run, and it can be difficult to fully capture the heterogeneity in the data. We develop an adaptive approximate Bayesian computation scheme to fit a variety of epidemiologically relevant data with minimal hyper-parameter tuning by using an adaptive tolerance scheme. We implement a novel kernel density estimation scheme to capture both dispersed and multi-dimensional data, and directly compare this technique to standard Bayesian approaches. We then apply the procedure to a complex individual-based simulation of lymphatic filariasis, a human parasitic disease. The procedure and examples are released alongside this article as an open access library, with examples to aid researchers to rapidly fit models to data. This demonstrates that an adaptive ABC scheme with a general summary and distance metric is capable of performing model fitting for a variety of epidemiological data. It also does not require significant theoretical background to use and can be made accessible to the diverse epidemiological research community. Introduction There is a trend towards greater realism using individual-based models within the ecological and epidemiological modelling community (Grimm et al., 2006; Bansal et al., 2007; DeAngelis and Grimm, 2014; Heesterbeek et al., 2015). The strength of this approach lies in its ability to directly address policy-relevant questions; however, properly estimating model parameters and measuring uncertainty in fits is often challenging (Deardon et al., 2010; Grimm and Railsback, 2013). In addition, the data will often be highly heterogeneous, making model fitting difficult. Examples of this in epidemiology include both human and animal parasitic infections, such as soil-transmitted helminths and nematodes, where the variance in egg counts can be greater than the mean (Shaw et al., 1998; Elkins et al., 1986; Grenfell et al., 1990). Data may also come in the form of multivariate time-series, such as the number of diagnoses in different disease stages or different age-categories, or age/risk/disease stage-stratified prevalence (Hollingsworth et al., 2008; Pullan et al., 2014). These data can be challenging to fit as they can be noisy and may not be easily modelled by simple distributions. Complex individual-based models will often have computationally intractable likelihoods, or likelihoods that are not easily defined or applied to data. In such cases, approximate Bayesian computation (ABC) has been proposed as a valid approach to model fitting (Csilléry et al., 2010). ABC has primarily been used to fit approximately Gaussian or Poisson-type data in the context of epidemiology (McKinley et al., 2009, 2014; Beaumont, 2010; Walker et al., 2010; Kypraios et al., 2016). Other data sources have been incorporated into model fitting using ABC, such as phylogenetic data (Tanaka et al., 2006; Luciani et al., 2009; Ratmann et al., 2012). It is often not clear what choice of summary statistic should be used, and this is often domain specific, which can prevent these methods being applied elsewhere (Luciani et al., 2009; Marin et al., 2012).
Whilst these are general problems, they are of particular relevance in the calibration of complex individual-based models designed for policy-relevant questions. In this paper, we consider the case of lymphatic filariasis transmission. Lymphatic filariasis (LF), or elephantiasis, is a neglected tropical disease, with over 40 million individuals displaying clinical manifestations of the disease and 53 countries requiring preventative chemotherapy. It is currently targeted for elimination as a public health problem by the World Health Organisation (WHO) by 2020 through the use of mass drug administration (MDA) (Rebollo and Bockarie, 2013; World Health Organization et al., 2011; Ottesen et al., 1997, 2008). As with many public health interventions, there is a certain amount of systematic non-adherence or heterogeneity in the use of interventions (Dyson et al., 2017). Coupled with this is the large amount of heterogeneity in exposure to infection across individuals. These complexities require that transmission models take into account the vector and parasite biology and human social factors (Irvine et al., 2015; Stolk et al., 2008; Chan et al., 1998). Due to the sparse nature of the data, parameter uncertainty in the fitted models must also be estimated if robust predictions are to be made (Singh and Michael, 2015). ABC then offers a strong alternative to other techniques for fitting complex individual-based models, which can also include uncertainty in the model parameters (Beaumont, 2010). We developed a robust, adaptive ABC scheme for infectious disease epidemiological data. This approach incorporates a parameter-free method of estimating the distribution of the data and includes an adaptive scheme for selecting tolerance values. We have developed this scheme as an open-source Python library with examples demonstrating its use. In the first section of this paper, we directly compare ABC to a more standard Bayesian fitting technique, as an example where the likelihood is known, by modelling counts drawn from a negative-binomial distribution. We vary the heterogeneity (shape parameter) in the distribution to investigate how the fitting performs for different degrees of heterogeneity. We compare how well the fitting performs as the number of tolerance levels and the number of particles (parameter sets) change, showing how the automated tolerance selection procedure produces accurate model fits. In the next section we apply the technique to two simple individual-based models, which include over-dispersed one-dimensional data and two-dimensional time-series data respectively. The results show that this technique is amenable to a wide range of models and data with little coding overhead or hyper-parameter tuning. Finally, we demonstrate the technique on a complex individual-based model of LF and show how disparate forms of data can be included in the model fitting process, highlighting the ease of incorporating multiple data sources into the fitting (Smith et al., 2017). Epidemiological count data Count data, such as the number of diagnosed cases in one year or the parasite/viral load per patient, are abundant in epidemiology. Often these data will be treated as being drawn from a Poisson distribution (Wakefield, 2007; Pullan et al., 2012), i.e. drawn from a probability distribution of the form P(X = x) = λ^x e^(−λ)/x!, for x = 0, 1, 2, .... The Poisson distribution is special because the mean and the variance are equal (both equal to λ).
Whilst there is some theoretical justification for this, often sources of data can be more over-dispersed, where the variance of the distribution is greater than the mean. In this case the data can be described by a negative binomial distribution. The issue is then how to measure the amount of overdispersion. Standard fitting techniques, such as maximum likelihood or Bayesian Markov chain Monte Carlo (MCMC), will often assume a particular distribution for the data. These techniques have proved highly effective for models where the underlying rates (such as those produced from deterministic differential equation models) can be described. Individual-based and other complex stochastic models are not amenable to these techniques, however, and so approximate fitting methods such as ABC have been considered. It is not clear, however, how to incorporate an appropriate goodness-of-fit metric for over-dispersed data (for example, comparing the means alone would not capture the heterogeneity in the distribution). Here we propose the use of kernel density estimation in order to resolve this problem. Kernel density estimation (KDE) is a non-parametric scheme for approximating a distribution using a series of kernels, or distributions (Bishop, 2006). The technique has previously been applied to approximating the likelihood of a summary statistic (Fearnhead and Prangle, 2012; Gutmann et al., 2016). However, we use it here to directly compare the modelled and real data. An important benefit of this approach is that, unlike with histograms, where the placement of bins is important, kernels are centred on each data point and hence bins do not need to be selected. Often a Gaussian kernel is chosen to represent the data; this has the useful property of allowing the distribution to be defined everywhere, thus making it possible to compare two empirical distributions. Without this property, the methodology would be unable to compare two different empirical distributions if there were not significant overlap. Overview of ABC methodology ABC is a technique used to perform Bayesian inference when a likelihood is either computationally intractable or not feasible to define. As an alternative, a sufficient summary statistic of the model data is used and compared to the data to be fitted. A distance metric is used to define the error between the data drawn from the model and the real data. As the error between the summary statistics of the model-generated data and the real data approaches zero, the posterior distribution is approximated with greater accuracy (Csilléry et al., 2010; Beaumont, 2010; Kypraios et al., 2016). More precisely, the function f summarises the data D in some form, for example, the mean parasite load in certain age-groups. For particular model parameters, θ, the model produces output M*_θ, where the star denotes that this is a realisation of the model data and is therefore a random variable. We then define a distance metric, ρ, which compares the summary statistic from the data, f(D), with that from the model, f(M*_θ). The posterior is then approximated by conditioning on the distance metric being below a threshold, ϵ, expressed as π(θ | ρ(f(D), f(M*_θ)) ≤ ϵ). The error in the approximation is assumed to decrease as the threshold, ϵ, decreases, with the method being exact when the threshold is zero (Rubin et al., 1984). This approximation is dependent on the choice of summary statistic f and distance metric ρ, which are often problem-specific.
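As a minimal illustration of this acceptance rule, the sketch below implements plain rejection ABC for a generic stochastic simulator. It is not code from the accompanying library: the model, prior, summary and distance functions are user-supplied placeholders, and the negative-binomial example at the bottom (with the sample mean as summary statistic and an arbitrary tolerance) is purely illustrative.

import numpy as np

def rejection_abc(data, simulate, prior_sample, summary, distance,
                  epsilon, n_particles, rng=None):
    # Plain rejection ABC: keep prior draws whose simulated summaries fall
    # within tolerance epsilon of the observed summary, so the accepted
    # particles approximate pi(theta | rho(f(D), f(M*_theta)) <= epsilon).
    rng = rng if rng is not None else np.random.default_rng()
    s_obs = summary(data)
    accepted = []
    while len(accepted) < n_particles:
        theta = prior_sample(rng)        # draw a particle from the prior
        sim = simulate(theta, rng)       # run the stochastic model once
        if distance(summary(sim), s_obs) <= epsilon:
            accepted.append(theta)
    return np.array(accepted)

# Illustrative use: a negative-binomial "model" with mean m and heterogeneity k.
rng = np.random.default_rng(1)
true_m, true_k = 20.0, 0.5
data = rng.negative_binomial(true_k, true_k / (true_k + true_m), size=200)
simulate = lambda th, r: r.negative_binomial(th[1], th[1] / (th[1] + th[0]), size=200)
prior_sample = lambda r: np.array([r.exponential(50.0), r.exponential(1.0)])
summary = np.mean
distance = lambda a, b: abs(a - b)
particles = rejection_abc(data, simulate, prior_sample, summary, distance,
                          epsilon=2.0, n_particles=100, rng=rng)
print(particles.mean(axis=0))            # rough posterior means for (m, k)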
The approximation also requires an appropriate choice of ϵ to balance accuracy against computation time. If ϵ is too large then the drawn samples are often a poor approximation of the posterior, and if ϵ is too low, then only very rarely would a sampled M*_θ meet the criterion, leading to increased computation time. One of the simplest conceptual algorithms for ABC is a partial rejection scheme where a particle (parameter set) θ is drawn from the prior distribution Θ. This particle is then used in the model M to produce some sample data M*_θ. The sample data M*_θ is then compared to the data D using the distance function ρ, which gives a single value for the discrepancy between the model data and the real data. This particle θ is then accepted if this discrepancy is below a pre-defined tolerance ϵ and rejected otherwise (Wilkinson, 2013) (e.g. for its first use see Pritchard et al., 1999, and see Blum and Tran (2010) for a smoothed rejection scheme applied to fitting an SIR model). In reality, this scheme can be inefficient if the prior is not similar to the posterior, meaning that many particles are rejected. Also, if the tolerance is too large then the sample of particles will be closer to the prior than the posterior. This means the scheme needs to be fine-tuned and may be impractical in most cases. A way of overcoming the low particle acceptance rate is to start with a large tolerance ϵ and then to proceed as above until the desired number of particles is selected (Fig. 1). These particles can then be used to generate an empirical distribution that replaces the prior in the algorithm. The tolerance can then be lowered and the rejection scheme repeated until the desired number of particles is sampled. This scheme provides a way of lowering the tolerance to increase the accuracy, whilst also overcoming the issue of a small acceptance rate (Walker et al., 2010). The distribution of tolerances will depend heavily on the number of particles used; here we explore how the number of particles affects the final distribution (see supplementary material). The challenge with this scheme is to choose a set of tolerances, {ϵ_t}, that efficiently reduces the error in the samples. Typically a set is chosen prior to fitting. We considered two schemes for tolerance selection. The first is to generate a set of tolerances by sampling the prior distribution (Faisal et al., 2013), drawing pairs of sample particles and using the distances between their model outputs to build up a distribution of typical errors from which the tolerances are taken. An alternative way of selecting tolerances is to do it adaptively, based on the distribution of errors that were accepted in the previous iteration (Beaumont et al., 2002). This is accomplished by recording, for each particle i, the accepted error τ_i. The tolerance in the next iteration can then be chosen as some percentile of these values. Here, we adapted a scheme where the 50th percentile of these values was set as the new tolerance in order to keep the acceptance rate at a reasonable level. We found the adaptive scheme consistently outperformed the prior distribution scheme, and as such we only consider the adaptive scheme here. We considered data derived from both one-dimensional and two-dimensional distributions. The particular form of the summary statistic chosen for all examples was an empirical distribution derived from count data. Certain summary statistics and distance metrics, such as the mean squared error between time-series data, have underlying assumptions of normality and unimodality (Walker et al., 2010; Brown et al., 2018).
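As a concrete sketch of the adaptive alternative, the next tolerance can simply be taken as a percentile (here the median) of the errors accepted at the previous iteration; the function name and the choice of the 50th percentile are illustrative rather than the library's API.

import numpy as np

def next_tolerance(accepted_errors, quantile=50.0):
    # Adaptive tolerance schedule: the next tolerance is the given percentile
    # of the errors tau_i accepted in the previous iteration, so the tolerance
    # shrinks while the acceptance rate stays at a workable level.
    return float(np.percentile(accepted_errors, quantile))

accepted_errors = [0.8, 1.3, 0.4, 2.1, 0.9, 1.7]   # errors from the last sweep
print(next_tolerance(accepted_errors))             # median becomes the new tolerance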
For the summary statistic and distance metric, rather than relying on such assumptions, we adopt a scheme that is capable of incorporating a general distribution by using a non-parametric method to approximate the underlying probability density function f of the data. Note that here, as a simplification, discrete distributions are approximated by a continuous distribution. This was achieved using a Gaussian kernel-density estimator. An empirical density f̂ was produced from count data {y_i} using a Gaussian kernel K as f̂(x) = (1/(n h)) Σ_i K((x − y_i)/h), where n is the number of data points and h is the kernel bandwidth. Although each data point is represented as a Gaussian with a small variance, the total distribution does not need to have the same properties and can, for instance, have higher variance or be multi-modal (Silverman, 1986). In order to compare the two approximated distributions, the non-symmetric Kullback-Leibler (KL) divergence was used. This measures the difference between the KDE-approximated probability distribution derived from the model data, p, and the KDE-approximated probability distribution derived from the real data, q. It is defined as D_KL(p ‖ q) = ∫ p(x) log[p(x)/q(x)] dx, where the divergence is greater than zero if the probability distributions differ and is zero if the distributions are equivalent. This method can also be easily adapted to a multivariate distribution, where an n-dimensional symmetric Gaussian with a fixed variance in each dimension can be used in the KDE step. The calculation of the KL divergence is then extended by integrating over the entire support of the probability density function derived in the KDE step. Our combined adaptive-scheduling partial rejection control with kernel density estimation algorithm is as follows (Fig. 1). A number of particles (parameter sets) are drawn from the prior distribution P(θ) to produce an initial set of particles {θ_i}. An initial tolerance value ϵ_1 is found by selecting the median value of the KDE KL divergence between the data and the model-derived data from the selected particles. A new set of particles is generated by randomly sampling from the current set and perturbing each sample using a zero-mean Gaussian random variable with small variance. The newly generated particle is accepted if its KDE KL divergence from the data is below the current tolerance, else it is rejected and another particle is generated according to the procedure defined. Once the desired number of particles has been accepted, the tolerance is lowered adaptively by selecting the median value of the accepted errors from the previous iteration. A new set of particles is then generated as before with the lowered tolerance ϵ. Once the particles are generated for the smallest tolerance, ϵ_T, the algorithm terminates and these are used as the sample from the posterior. A summary is given in Algorithm 1. We also show, for a Gaussian likelihood, that the minimisation of the KL divergence with a KDE representation of the data is equivalent to maximising the likelihood (see supplementary material). Algorithm 1. Adaptive ABC partial rejection control. Fig. 1. In the first step, particles are drawn from a prior distribution, which is uniform between two values (top row; steps 1-2 in Algorithm 1). For a given tolerance, a new particle is drawn for the updated tolerance ϵ_1 by choosing a particle at random, perturbing it slightly and then running a model evaluation (steps 5-6 in Algorithm 1). If the error of that particle is below ϵ_1, the particle is accepted (blue); otherwise it is rejected (red) (steps 7-8 in Algorithm 1). This procedure continues until all N particles are accepted at the new tolerance level (steps 9-10).
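The KDE-based distance itself can be written compactly with SciPy, as sketched below: Gaussian kernel-density estimates are built for the simulated and observed counts, and the KL divergence is evaluated numerically on a shared grid. The grid limits, the bandwidth (SciPy's default rule) and the small floor used to avoid log(0) are illustrative assumptions rather than the exact choices made in the released library.

import numpy as np
from scipy.stats import gaussian_kde

def kde_kl_distance(sim_counts, obs_counts, n_grid=512, pad=0.2, eps=1e-12):
    # KL divergence D(p || q) between Gaussian-KDE approximations of the
    # simulated (p) and observed (q) count distributions, evaluated on a grid
    # wide enough to cover both samples.
    sim = np.asarray(sim_counts, dtype=float)
    obs = np.asarray(obs_counts, dtype=float)
    p_kde, q_kde = gaussian_kde(sim), gaussian_kde(obs)
    lo, hi = min(sim.min(), obs.min()), max(sim.max(), obs.max())
    span = (hi - lo) if hi > lo else 1.0
    grid = np.linspace(lo - pad * span, hi + pad * span, n_grid)
    p = p_kde(grid) + eps
    q = q_kde(grid) + eps
    p /= np.trapz(p, grid)               # renormalise the gridded densities
    q /= np.trapz(q, grid)
    return float(np.trapz(p * np.log(p / q), grid))

rng = np.random.default_rng(0)
obs = rng.negative_binomial(0.5, 0.5 / 20.5, size=200)   # over-dispersed "data"
sim = rng.negative_binomial(0.5, 0.5 / 10.5, size=200)   # model run with a different mean
print(kde_kl_distance(sim, obs))   # positive; shrinks as the distributions agree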
As the tolerance decreases, the particles converge onto the target distribution (Fig. 1, bottom row). 2.3. Example applications of the method 2.3.1. Example one: negative binomial distribution As a first example, and in order to compare fitting with our ABC scheme to other fitting techniques, samples were drawn from a negative binomial distribution with varying mean and heterogeneity parameter k. When k < 1, the distribution is over-dispersed, with a greater variance-to-mean ratio than expected under a Poisson distribution. This means that the distribution is more heavy-tailed than an equivalent Poisson distribution. When k > 5, the distribution is less over-dispersed and small samples more closely resemble a Poisson distribution. In order to test how the parameter fitting performs for increasing heterogeneity (decreasing k), a sample is drawn from a negative binomial distribution parameterised by the mean m and heterogeneity k, with probability mass P(X = x) = [Γ(x + k)/(Γ(k) x!)] (k/(k + m))^k (m/(k + m))^x; the likelihood for an independent and identically distributed sample is the product of these probabilities over the observations. m was varied between 1 and 100 and k was varied between 0.1 and 5. In order to be consistent between the samples, the prior used in ABC was fixed for all samples before observing the data. Exponential priors were used, with the means of the distributions chosen to be the average of the ranges explored for m and k. A Metropolis-Hastings MCMC scheme was also implemented and fitted to the negative-binomial count data (Gilks et al., 1995). The same priors that were used for the ABC scheme were also used for the MCMC scheme to provide a faithful comparison. The impact of the number of particles and the size of the tolerance was also explored using this model. For fixed parameters (m = 50, k = 3.0) the derived distribution was estimated for tolerance steps from 1 to 25 and particle numbers from 10 to 200. The resulting estimated posterior was then compared to the true posterior (derived from the MCMC scheme). The previous example can be easily implemented in the developed Python library with code that sets up a function that outputs an array of samples drawn from a negative binomial distribution for inputs m and k (denoted ibm), defines the priors as a list of functions that generate a sample for each parameter (denoted priors), provides the fitting object with the individual-based model, the data (denoted xs) and the priors, and sets the method and number of steps to iterate through (denoted by the method setup) (Listing 1). The method is then run with a specified number of particles (denoted by the method run). Listing 1. Code for negative binomial distribution example. Example two: parasite model As a simple epidemiological example, we propose an individual-based model where each individual acquires parasites at a constant rate that is drawn from a gamma distribution with mean λ and shape parameter k. Each parasite within an individual is lost at a constant rate δ. When k is low the distribution of parasites is more heterogeneous, with many individuals uninfected but a few highly infected individuals carrying very large parasite numbers. Schematically, the parasite dynamics within an individual can be written, for each individual i, as P_i → P_i + 1 at rate λ b_i and P_i → P_i − 1 at rate δ P_i, where b_i is a random variable drawn from a gamma distribution with shape k and mean 1.
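A minimal simulation of this host-parasite model is sketched below, using a simple fixed-step (tau-leaping style) update rather than an exact event-driven algorithm; the population size, time step and horizon are illustrative choices, not values from the paper.

import numpy as np

def simulate_parasites(lam=10.0, k=1.0, delta=0.5, n_hosts=200,
                       t_end=20.0, dt=0.05, rng=None):
    # Host i acquires parasites at rate lam * b_i, with b_i ~ Gamma(shape=k, mean=1),
    # and each parasite is lost independently at rate delta.
    rng = rng if rng is not None else np.random.default_rng()
    b = rng.gamma(shape=k, scale=1.0 / k, size=n_hosts)      # mean-1 exposure factors
    P = np.zeros(n_hosts, dtype=int)
    for _ in range(int(t_end / dt)):
        gains = rng.poisson(lam * b * dt)                    # acquisitions this step
        losses = rng.binomial(P, 1.0 - np.exp(-delta * dt))  # per-parasite loss
        P = P + gains - losses
    return P

counts = simulate_parasites(lam=10.0, k=0.5, delta=0.5, rng=np.random.default_rng(2))
print(counts.mean(), counts.var())   # for small k the variance greatly exceeds the mean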
This model could easily be extended by making the force of infection dependent on the current distribution of parasites, as well as on other factors such as environmental heterogeneity. It is, however, meant to be instructive, and as such the simplest form was used. Example three: stochastic SIS model The stochastic Susceptible-Infected-Susceptible (SIS) model was implemented as an example of time-series data that can be estimated using a two-dimensional distribution approach. The model can be described as a Markov process with two events: an infection and a recovery. For a population of size n, with the number of infected denoted I, the infection and recovery events occur according to I → I + 1 at rate β(n − I)I/n and I → I − 1 at rate γI. The parameters β and γ can be reparameterised in terms of the basic reproduction number R_0 and the expected time to recovery 1/γ, with β = R_0 γ. The model was simulated in discrete timesteps using a tau-leaping algorithm, and the corresponding likelihood was calculated using the transition rate matrix of the Markov process (see supplementary material). In order to utilize these data with the KDE approach described, we may convert the one-dimensional time-series data into two-dimensional distribution data in the following way, where we explicitly take advantage of the Markov property of the underlying model: each row of the data matrix is an (I_t, I_{t+1}) pair, which is a point in 2D, and the set of pairs can therefore be used to build up a two-dimensional probability density function (an example of this is shown in Fig. 3b). With this data representation the methodology is implemented in the Python package in exactly the same way as for the one-dimensional negative-binomial example. The output of the model function (ibm) is used to determine the dimension of the data, and the list of prior random variable generators is used to determine the size of the parameter space in the model code. Example four: lymphatic filariasis We used a stochastic individual-based model of lymphatic filariasis (Irvine et al., 2015). The model is a multi-scale stochastic simulation of individuals with worm burden, microfilaraemia (prevalence of the pre-larval stage of LF in the peripheral blood) and other demographic parameters relating to age and risk of exposure. Humans are modelled individually, with their own male and female worm burdens denoted W_i^m and W_i^f. The density of microfilariae (mf) in the peripheral blood is also modelled for each individual and denoted M_i. The total mf density in the population contributes towards the current density of L3 larvae in the human-biting mosquito population. The model dynamics are divided into the individual human dynamics, including age and turnover; worm dynamics inside the host; microfilariae dynamics inside the host; and larvae dynamics inside the mosquito. Five villages in the East Sepik Province of Papua New Guinea have been the focus of extensive research into filariasis epidemiology and transmission (Bockarie et al., 1998, 2003; Michael and Singh, 2016; Irvine et al., 2018). These villages received annual mass drug administration from 1993 through 1998, with no further interventions until bed-nets (LLINs) were distributed in August 2009. Self-reported LLIN use ranged from 75% to 90% (Reimer et al., 2013). Microfilaria prevalence was measured in these communities in 2008 as part of the post-MDA evaluation (Reimer et al., 2013).
This was done using a BinaxNow filariasis antigen test and by microscopic evaluation of 1 mL of filtered venous blood, collected at night. The age of participants was also recorded. The KDE ABC methodology was applied to three geographically variable parameters: the vector-to-host ratio V/H, the heterogeneity of bites k, and the probability s_2 of an infective bite leading to the establishment of an adult worm. These parameters were fitted to the mf count distribution of each village separately. This was then compared to the case where the age-prevalence data were also included in the model fitting. Age-prevalence data were incorporated through the use of a mean squared distance function in addition to the KDE KL divergence function for the mf count data. Implementation The methodology and models were implemented in Python 2.7 (Python Software Foundation, 2018), using the packages SciPy and NumPy (Van Der Walt et al., 2011) and seaborn (Waskom et al., 2014) for data visualisation. Fig. 2. Comparison between MCMC and ABC methods for fitting a negative binomial distribution for a range of mean m and heterogeneity k values. (a) Comparison between fits for different mean values m; the dashed line represents the true values and the shading represents the 95% and 50% percentile ranges of the prior distribution, with the median given as a solid line. The prior distribution was kept fixed for each fitting. The adaptive KDE scheme closely matches the MCMC scheme for all values considered. When the resulting fit is biased for ABC, it is also biased in the same way for MCMC, providing confidence that the scheme is approximating the true posterior. (b) Comparison between MCMC and the adaptive ABC scheme for the heterogeneity k. For k > 3, both the MCMC scheme and the adaptive ABC scheme underestimate the true value in a consistent way due to the influence of the prior. Comparisons between the fitted distributions of the adaptive ABC scheme against the number of adaptive tolerance steps are shown for (c) m and (d) k. The true posterior calculated using MCMC is represented as a series of shaded regions, with the 95% credible interval, 50% credible interval, and the median shown from lightest to darkest respectively. An open-source Python library, including examples, can be found at the following URL: https://github.com/sempwn/ABCPRC. This library has been tested for both Python version 2.7 and version 3.6. Drawing from a negative binomial distribution MCMC was directly compared to the adaptive ABC method using samples drawn from a negative binomial distribution with a range of means m and heterogeneities k (Fig. 2). The ABC scheme was run with 100 particles over 25 tolerance steps, while the MCMC scheme was run for 10,000 steps with a burn-in period of 2000 steps and a fixed step size. Visual inspection was used to determine the convergence of the MCMC chains. Exponential priors with rates 50 and 1 were used for m and k respectively. For small k, samples from the distribution are more over-dispersed, and larger k values more closely approximate the Poisson distribution. For all mean m values considered, both the MCMC method and the ABC method closely match the true value (Fig. 2a). As the size of m grows, so does the size of the 95% credible interval in both cases. Where the model fit is biased due to the data realisation producing more low-probability samples than expected (e.g.
mean value 50), both MCMC and ABC are biased in a consistent way. This provides more confidence that the scheme is recovering the true posterior distribution. Further evidence of this can be seen in the fitting as the heterogeneity k varies (Fig. 2b). Here the prior is stronger, with a smaller 95% interval relative to the parameter range considered. For small values of k, the estimated posterior distributions closely match the true values. As k increases above 3, the true value moves outside of the prior's 95% range and the prior thus begins to have more influence on the posterior. This can be seen in that the expected value of k estimated from both methods is consistently lower than the true value. The number of adaptive tolerance steps strongly influences the estimated posterior for both the mean m (Fig. 2c) and the heterogeneity k (Fig. 2d). For a small number of steps (1-5), the estimate more closely resembles the prior distribution than the posterior distribution. From 7 to 9 steps the estimate is a combination of the prior and posterior distributions. For values above 10, the distributions closely match the true posterior. It should be noted that these values would likely change depending on the model and data, although we have found that 20 or more tolerance steps are sufficient for the estimate to converge to the posterior for the examples considered here. The impact of the number of particles on the estimated posterior was also considered. Neither the expected value nor the range was consistently affected by the number of particles, and even a small number of particles could approximate the true posterior reasonably well (see supplementary material). This suggests that if model evaluations are costly, then a small number of particles can be used to approximately determine the posterior before running the method with a larger number of particles. Host-parasite model The method was applied to the simple individual-based model of parasitic infection. A data sample was produced from the model (parameters: λ = 10, δ = 0.5, γ = 1.0) and used in the fitting procedure. All parameters were given exponential prior distributions with mean rates broad enough to capture most dynamics. As the tolerance was reduced, the variance in each of the marginal distributions lowered. The final distribution was unimodal for each parameter, with modal values close to the true underlying values. The final distribution also captures correlations between certain parameters, such as between the mortality and infection rates (see Fig. 2, supplementary material). SIS model A realisation of the SIS model was taken with parameters R_0 = 2, population size n = 100, and recovery time 1/γ = 1, for 100 time-steps (Fig. 3a). The corresponding joint distribution of the (I_t, I_{t+1}) data was approximated using a two-dimensional KDE (Fig. 3b). Here the joint distribution was approximately a correlated bivariate Gaussian, where the number of infected at time t + 1 was strongly dependent on the number of infected at time t. The empirical distribution also has a longer tail than expected for a Gaussian distribution, due to the initial transient phase where the infected population is rapidly increasing from the initial conditions. The adaptive KDE ABC method was able to accurately determine the correct R_0 and 1/γ values and was consistent with the true posterior (Fig. 3c). For other R_0 values the adaptive ABC method was also able to accurately approximate the true posterior and recover the true value (Fig. 3d).
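As an illustration of the two-dimensional representation used for this example, the sketch below simulates an SIS realisation with a fixed-step tau-leaping update and builds the joint density of consecutive states with SciPy's multivariate Gaussian KDE; the parameter values mirror those quoted above, but the implementation details (time step, clipping, bandwidth) are illustrative rather than the library's own.

import numpy as np
from scipy.stats import gaussian_kde

def simulate_sis(R0=2.0, recovery_time=1.0, n=100, I0=10, steps=100, dt=0.1, rng=None):
    # Tau-leaping SIS realisation: infections at rate beta*(n - I)*I/n and
    # recoveries at rate gamma*I, with gamma = 1/recovery_time and beta = R0*gamma.
    rng = rng if rng is not None else np.random.default_rng()
    gamma = 1.0 / recovery_time
    beta = R0 * gamma
    I = np.empty(steps + 1, dtype=int)
    I[0] = I0
    for t in range(steps):
        new_inf = rng.poisson(beta * (n - I[t]) * I[t] / n * dt)
        new_rec = rng.poisson(gamma * I[t] * dt)
        I[t + 1] = np.clip(I[t] + new_inf - new_rec, 0, n)
    return I

I = simulate_sis(rng=np.random.default_rng(3))
pairs = np.column_stack([I[:-1], I[1:]]).astype(float)   # (I_t, I_{t+1}) points in 2D
kde_2d = gaussian_kde(pairs.T)                           # joint density of consecutive states
print(kde_2d(pairs.T[:, :5]))                            # density at the first few pairs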
LF in Papua New Guinea Fitting was performed on five separate datasets of lymphatic filariasis infection, including individuals' age and mf count. The summary statistics used were derived from the mf count alone, the mf age-prevalence alone, and a combination of the two. Three parameters were fitted, while the other parameters in the model were derived from literature estimates. Age-prevalence alone was unable to accurately determine the vector-to-host ratio and the probability s_2, with wide variances for the estimates of both (Fig. 4a and c). Using the mf count data only produced a smaller estimated range for these parameters, whilst giving a slightly wider range for the heterogeneity k (Fig. 4b). Combining both the mf count summary statistic and the mf age-prevalence produces a more highly resolved marginal posterior for all three fitted parameters (Fig. 4: fitting using count data alone in red, fitting using age-prevalence data alone in black, and fitting using both count and prevalence data in blue). Discussion Individual-based models abound in epidemiology due to their intuitive description and the greater ease of simulating many complex aspects of a system compared to deterministic models (Auchincloss and Diez Roux, 2008). These models increasingly involve processes that may not be easily captured by an ordinary differential equation or standard stochastic processes. This presents a great challenge, however, as standard fitting techniques have been developed for more traditional models, whereas ones for individual-based models have languished (Heesterbeek et al., 2015). Although for certain models it may be technically possible to write down a likelihood, there can be huge computational or technical barriers to doing so, whether due to a large number of hidden states or the sheer number of components in the model. This leads to having to resort to techniques such as visual inspection to perform fitting, introducing potential biases and leaving no structured way to deal with the uncertainty in the fitted parameters. What is desirable is a technique with which we can enjoy the benefits of Bayesian fitting, such as incorporating our prior knowledge and producing samples to estimate parameter uncertainty, without the often prohibitive procedure of conceiving of and calculating a likelihood. Here we explored an ABC method as a solution for Bayesian model fitting. In particular, we developed a technique that is amenable to a variety of data with minimal hyper-parameter tuning. The motivation is to provide a tool for model fitting with uncertainty quantification to a wide range of researchers who may not have the necessary technical background to develop a full Bayesian approach with a derived likelihood. We performed model fitting using a summary statistic of the counts, approximating their distribution using a kernel density estimator. This allows fitting to be performed without the explicit assumptions on the particular type of distribution the data take that are common with other model fitting techniques. In order to compare the accuracy of ABC for increasingly heterogeneous count data, the procedure was carried out on various data generated from a negative-binomial distribution. For high heterogeneity, the procedure was able to accurately determine the shape parameter (k) as well as the mean parameter (m).
This demonstrates that the technique is capable of handling a variety of heterogeneous data and can give results similar to standard Bayesian MCMC. The technique was also able to perform well on time-series data by transforming the data into a two-dimensional point representation. This approach would appear generally applicable to other time-series data, including systems that may exhibit chaos (see supplementary material). For many individual-based models a likelihood may be either computationally or analytically intractable. In these cases other methods have been proposed to overcome this issue. Using a partial rejection control scheme provides, at each iteration, a sample of particles (parameter sets) that are initially drawn from the prior but, as the tolerance decreases, become more representative of the posterior. There are, however, typically issues surrounding the choice of tolerances, which must be set such that the scheme is able to draw samples for the next iteration. Here, we overcome these issues by demonstrating two different schemes for choosing a set of tolerances. This creates a much more efficient pipeline for fitting, without the need to perform an exploratory analysis of the error function beforehand (Walker et al., 2010). One of the key issues with ABC is that it is an approximation method only. If the method does not sufficiently explore the parameter space, the technique may produce spurious results. One possible diagnostic is to check the distribution of errors that were accepted for each tolerance. If the errors are not significantly decreasing, this may indicate that the procedure is stuck in a local minimum and that the variance of the priors may need increasing. The distribution of errors for the final tolerance can also indicate whether the procedure was halted prematurely or whether lower tolerances could be accepted. There is also the issue of the choice of summary statistics to be used and the number of parameters to fit. It may be that some parameters can be estimated from independent studies, without the need to include them in the ABC procedure. It would then seem advisable to use these values either as a well-informed prior or as a point estimate, as was done here. If the model is slow to evaluate, then this may also lead to practical fitting issues. Emulation methods may help to further increase the speed of fitting by approximating the error manifold through the use of non-parametric fitting techniques such as Gaussian processes (Conti and O'Hagan, 2010; Drovandi et al., 2011). One primary advantage of ABC over other techniques is the ability to utilize a range of data within model fitting. In the example of fitting an individual-based model of lymphatic filariasis infection to PNG data, a combination of summary statistics was used. We explored fitting using count data alone, constructing an empirical probability distribution and then comparing it against the model count data using the KL divergence. This summary statistic was then combined with age-prevalence data by constructing the prevalence in a defined set of age-categories and then using a weighted sum of squares in order to take into account the number of individuals in each age-category. We found that by adding in the extra information about the age-prevalence distribution, the fitting was able to better resolve some of the parameters. ABC provides a way of incorporating many different types of data into the fitting, and this suggests that the full set of pertinent summary statistics should be used.
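A minimal sketch of the combined misfit described above: the KDE KL divergence computed on the mf counts is added to a weighted sum of squared prevalence differences over age categories. The weighting by group size and the relative scaling of the two terms are assumptions made for illustration.

import numpy as np

def combined_distance(kl_counts, prev_model, prev_data, n_per_group, w_prev=1.0):
    # Total misfit = KDE KL divergence on the count data plus a weighted sum of
    # squared prevalence differences, with age-group sizes as weights.
    prev_model = np.asarray(prev_model, dtype=float)
    prev_data = np.asarray(prev_data, dtype=float)
    n_per_group = np.asarray(n_per_group, dtype=float)
    wss = np.sum(n_per_group * (prev_model - prev_data) ** 2) / np.sum(n_per_group)
    return kl_counts + w_prev * wss

# e.g. prevalence in four age categories (model vs. data) and the group sizes
print(combined_distance(0.35, [0.10, 0.25, 0.30, 0.28],
                        [0.08, 0.22, 0.35, 0.30], [40, 55, 50, 30]))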
Conclusion The adaptive ABC method incorporating kernel density estimation and partial rejection control is a potentially powerful tool in model fitting for epidemiological data. We demonstrate that the same methodology can fit both macro- and micro-parasitic infectious diseases and one-dimensional or two-dimensional data, and can readily incorporate a wide array of data sources. In order for this tool to be readily available to a wide range of researchers, we have developed it as an open-source Python library, including example code to demonstrate its use. Data availability All code is packaged as a Python library and can be found at the following GitHub repository: https://github.com/sempwn/ABCPRC. This includes all code for generating the data used in the example model fitting.
8,775.2
2018-05-26T00:00:00.000
[ "Computer Science" ]
Combined internal resonances at crossover of slacked micromachined resonators The dynamics of micro-/nanoelectromechanical systems (M/NEMS) curved beams have been thoroughly investigated in the literature, commonly for curved arch beams actuated with electrodes facing their concave surface. Except for few works on slacked carbon nanotubes, the literature lacks a deep understanding of the dynamics of slacked curved resonators, where the electrode is placed in front of the convex beam surface. This paper investigates the dynamics of slacked curved resonators as experiencing combined internal resonances. The curved slacked resonator is excited using an antisymmetric partial electrode while the electrostatic voltage load is driven to elevated excitations, which breaks the symmetry of the system and affects natural frequencies and corresponding mode shapes. The axial load is tuned to monitor the ratios between the natural frequencies of different vibration modes, which induces simultaneous 1:1 and 2:1 internal resonances between the first and second mode with the third. We observe the interaction of hardening and softening bending of the fundamental backbone curves triggering various patterns of the response scenario and the appearance of coexisting regions of irregular dynamics. Introduction The activation of internal resonances in micro-and nanoelectromechanical systems (M/NEMS) has recently been the focus of a renewed interest in research studies, where they have been observed to develop complex bifurcation structures as driving the device deep into the nonlinear regime [1]. Internal resonance is mainly characterized by the energy leakage to a new vibrational mode rather than the targeted mode where the ratio between the involved modes must be commensurate [2,3]. Asadi et al. [4] investigate the occurrence of internal resonances in a nonlinear asymmetric microbeam resonator, where the engaged modes experimentally experience asymmetric M-shaped internal resonance curves while exhibiting a vigorous energy exchange until the occurrence of the drop-jump phenomenon. Samanta et al. [5] focus on the complex energy transfer arising at internal resonances in MoS 2 nanomechanical systems and experimentally capture the activation of various internal resonances exhibiting different patterns of the response curves as the drive level is increased. Kirkendall and Kwon [6] detect nested regions of multistability at internal resonance in an electrostatic crystal plate, where the ensuing nonlinear modal interaction can generate topologically distinct dynamics over the parameter space, and the increased complexity related to the internal resonance activation provides a considerable versatility of the device response. Ruzziconi et al. [7] analyze a 2:1 internal resonance experimentally arising in the higher-order modes of a MEMS microbeam, and examine the changes induced in the dynamical response, including a phase shift that can be experienced among the modes, alerting that the internal resonance may affect the occurrence of the ultimate dynamic pull-in threshold. The inherent nonlinear nature and the low damping of these miniature moveable structures present an ideal platform for activating internal resonances for fundamental analysis [3,[8][9][10] or for exploitation in various potential applications [11][12][13][14][15]. Sarrafan et al. 
[15] present a proof-of-concept design for a nonlinear rate microsensor, where 2:1 internal resonance is employed for its operation to improve the robustness of the response to design parameter variations. Taheri-Tehrani et al. [16] demonstrate mutual 3:1 subharmonic synchronization in a micromachined silicon disk resonator and suggest applications for the frequency-selective detection of weak signals. Pu et al. [17] investigate synchronization of electrically coupled micromechanical oscillators with 1:3 frequency ratio to design high-performance resonant sensors with better frequency resolution and larger scale factor. Zhang et al. [18] take advantage of internal resonances arising in polyvinylidene fluoride piezoelectric membrane to improve the sensitivity of a resonant mass sensor. Wang et al. [19] observe tunable frequency locking in the internal resonance of two electrostatically coupled microresonators with a frequency ratio 1:3, which is used to achieve enhancement of the frequency stability. Yang and Towfighian [20] intentionally combine internal resonances and magnetic nonlinearity to improve efficiency for energy harvesting, showing via experiments and simulations that the design outperforms the linear system by doubling the frequency bandwidth. As observed in Jeong et al. [21] and successively further investigated in Potekin et al. [22], the internal resonance triggered in atomic force microscopy by the non-smooth nonlinear tip-sample interactions provides stronger sensitivity to material composition, which allows for enhancing simultaneous topography imaging and compositional mapping. The distinctive features induced by internal resonance activations pave the way for developing novel approaches in different fields [23][24][25][26][27]. Keşkekler et al. [24] activate successive internal resonances in graphene nanodrums by regulating the drive level and show experimental evidence of a nearly two fold increase in nonlinear damping; based on these achievements, they explore the possibility of using modal interactions to controllably tune the nonlinear dissipation. Antonio et al. [25] analyze nonlinear coupling through internal resonance in a MEMS resonator and show that the contributed mode absorbs the frequency and amplitude fluctuation of the targeted mode, i.e., the oscillation frequency stability can be improved by operating the resonator in the internal resonance regime. Chen et al. [26] drive a MEMS oscillator at internal resonance and experimentally observe that the internally coupled mode coherently transfers energy back to the principal mode as the external energy supply is switched off; the coupled mode acts as an energy reservoir, leading to developing novel strategies to engineer the dissipation process. Notably, curved beams showed considerable potential for activating different types of internal resonances such as 1:1, 2:1, 3:1, and 4:1 [28,29]. The curvature could be intentionally fabricated (arch resonators) and/or induced by applying compressive stress for buckled beams [28]. Among the main characteristics of curved beams is the high tuning of their natural frequencies as tuning their axial load. More recently, several techniques were investigated to study different methods of axial tuning of arch resonators, such as electrothermal tuning, guided-electrode tuning, or shape optimization. These axially tuned resonators showed great potential for applications [30][31][32][33]. Alcheikh et al. 
[31] fabricate a highly sensitive and wide-range resonant magnetic microsensor based on the detection of the resonance frequency of an in-plane buckled microbeam operated near the buckling point. Hafiz et al. [32] experimentally demonstrate a reprogrammable logic device based on electrothermal tuning of the resonance frequency of a microelectromechanical arch resonator capable of performing all the fundamental 2-bit logic operations. The dynamics of initially curved MEMS resonators were deeply investigated in the literature [34][35][36]. The initially curved microbeam comprises inherent quadratic (i.e., due to curvature) and cubic (i.e., due to midplane stretching) nonlinearities, where the dominance of one nonlinearity over the other depends highly on the geometrical properties. Actuating the arch microbeam electrostatically will add an additional quadratic nonlinearity to the system [1,37]. Classically, MEMS arch resonator is actuated using an electrode facing its concave surface to mainly induce snap-through motion. Ouakad and Sedighi [34] analyze the response of MEMS arches assuming an outof-plane actuation pattern and show that the static profile can alter from symmetric shape to asymmetric one, depending on the shape and length of the stationary non-parallel electrodes. Najar et al. [35] explore the potential of electrostatic initially curved microbeams to serve as bifurcation gas sensors and investigate the feasibility of exploiting the transition from regular periodic to irregular chaotic response as a detection mechanism. Wang and Ren [29] studied the 3:1 internal resonance between the first two symmetric modes of an electrostatically actuated arch resonator. A multiple scale method was used to investigate the dynamics and the energy transfer between modes as activating the internal resonance. On the other hand, the tuning of the axial load of curved beams leads to monitoring the ratios between different vibration modes. Recently, Hajjaj et al. [10] tune the natural frequencies of MEMS in-plane clamped-clamped arch microbeams up to experience the crossover phenomenon and experimentally demonstrate the induced nonlinear interactions at internal resonances. Commonly, the MEMS arch resonator actuated using an electrode facing its concave surface leads to higher actuation voltages. Despite extensive research on different types of internal resonances in these MEMS resonators, there is a lack of characterizing them at a slack position, leading to lower actuation voltages and low power consumption. In the present paper, we investigate numerically the effects of assuming slacked configuration while we drive the system by electrodynamic voltage excitations. This configuration is characterized by the suppression of the snap-through instability comparing to classical arches, which is appreciated in a wide range of applications. One should also note that the in-plane slacked arch beam can be easily fabricated using the standard photolithography fabrication processes [32]. Half-electrode configuration is considered. The device is deliberately operated in a relatively elevated bias electrostatic voltage, which visibly breaks the system's symmetry. The microbeam is subjected to compressive axial load, which is conveniently tuned in order to modify the frequency ratios and achieve the simultaneous activation of 1:1 and 2:1 internal resonance between the first three lowest vibration modes. 
After analyzing the hybridization of modes, we explore the internal resonance activation, showing the evolution of the response dynamics as the stiffness is tuned and the crossing zone is traversed. The study is focused on the variety of patterns arising in the response of the arched-beam MEMS resonator, where the modes involved in the simultaneous 1:1 and 2:1 internal resonance are differently dominated by hardening and softening bending behavior. As the voltage excitation is increased, we investigate the emergence of regions of irregular motion along the resonant branches. Due to the intrinsic non-symmetry of the system, the device frequency response is examined by referring to different sections of the microbeam, which allows monitoring the contribution of each individual mode. The paper is organized as follows. Section 2 introduces the MEMS arch resonator with slacked configuration and derives the problem formulation. Section 3 presents the tuning of the natural frequency ratios as both the axial load and the electrostatic voltage load are varied. Section 4 explores the evolution of the 1:1 internal resonance between the first and second modes as the veering zone is traversed, while a 2:1 internal resonance with the third mode is concurrently activated. Section 5 investigates the response at elevated voltage excitations, showing quasi-periodic and chaotic dynamics. The main conclusions are summarized in Sect. 6. System model The MEMS device is modeled as a parallel-plate capacitor, where the arc microbeam presents an initial curvature Ĉ_i and an initial shape ŵ_0(x̂) (Fig. 1). The initial shape of the arc microbeam is given by Eq. (1), where x̂ denotes the position along the arc length L and R represents the radius of the arc. The arc microbeam is assumed to have a rectangular cross section, A = bh, with a moment of inertia I = bh³/12, where h and b denote the thickness and width, respectively. The stationary electrode faces the convex surface of the arc microbeam with a transduction gap d, and covers half of the microbeam length, allowing actuation of both symmetric modes and the first antisymmetric one. The arc microbeam is actuated electrically via a DC bias voltage V_DC and an AC harmonic voltage of amplitude V_AC and frequency Ω̂. The non-dimensional equation of motion governing the transverse vibration of the arc microbeam, w(x, t), is given by Eq. (2) [10, 33], and the electric force term, Eq. (3), involves the unit step function U(x), which defines the length and position of the lower stationary electrode. The arc microbeam is subjected to clamped-clamped boundary conditions, Eqs. (4) and (5). In Eqs. (2)-(5), dots denote derivatives with respect to time t, and primes denote derivatives with respect to space x. The non-dimensional variables and parameters appearing in Eqs. (2)-(5) are defined in terms of the effective Young's modulus E, the material density ρ, the axial load N, the viscous damping coefficient ĉ, and the permittivity ε of the gap-space medium. In the literature, inducing an axial load in MEMS devices is frequently used to attain large tunability of the natural frequencies. In a wide range of applications, such as thermal-conductivity-based gas [30] and pressure sensors [31], and logic memories [32], the axial load was controlled by different transduction mechanisms, mainly electrothermal actuation [30] and a sided electrostatic electrode [38]. Here, the non-dimensional axial load, N_non, is kept generic such that any mechanism that can generate an axial load could be applied.
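To make the electric force term concrete, the parallel-plate approximation commonly adopted for this kind of device is sketched below in LaTeX form. The non-dimensional coefficient that multiplies it in Eq. (3) and the precise expression of the gap for the slacked geometry depend on the scaling and sign conventions adopted in the paper, so this should be read as an illustration rather than the exact expression.

% Parallel-plate electrostatic force per unit length, applied over the
% electrode region selected by the unit step U(\hat{x}):
F_e(\hat{x},\hat{t}) \;=\;
\frac{\varepsilon\, b \left[ V_{DC} + V_{AC}\cos\!\left(\hat{\Omega}\hat{t}\right) \right]^{2}}
     {2\, g(\hat{x},\hat{t})^{2}}\; U(\hat{x})

where g(x̂, t̂) is the local gap between the microbeam and the stationary electrode; it is built from the transduction gap d, the initial shape ŵ₀ and the deflection ŵ, according to the sign convention chosen for the electrode facing the convex surface.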
By fixing the axial load N_non, we simulate the dynamic response of the arc microbeam by discretizing the equation of motion [Eq. (2)] using the Galerkin procedure, where the dynamic transverse vibration of the arc is expressed as w(x, t) = w_s(x) + Σ_{i=0}^{p} u_i(t) φ_i(x), where w_s(x) is the static deflection, u_i(t) (i = 0…p) are the non-dimensional modal coordinates, and φ_i(x) (i = 0…p) are the mode shapes of the arc microbeam at the fixed axial load N_non. This yields a system of p equations of motion in the modal coordinates. Fig. 1. Schematic of the slacked curved microbeam electrically actuated with a partial electrode configuration. In the following sections, five modes are considered for the static and the eigenvalue analysis, while the first three modes are used for the analysis of the nonlinear response dynamics. An arc microbeam made of silicon is assumed, with dimensions given in Table 1. Natural frequencies We analyze the variation of the first three natural frequencies of the arc-shaped MEMS microbeam as the compressive axial load N_non is increased. The frequency trends at zero electrostatic voltage (V_DC = 0 V) are reported in Fig. 2a. The first natural frequency (first symmetric mode) increases with the axial load until it nearly settles at about f_1 = 100 kHz at N_non = 250. Concurrently, the second natural frequency (first antisymmetric mode) decreases until it reaches a nearly constant value of f_2 = 73 kHz. As a result, we can observe the crossing phenomenon, where the first and second natural frequencies both take the value f_1 = f_2 = 75 kHz, which occurs at about N_non = 120. As expected, there is an interchange of the order of the modes after crossing, while no hybridization between the mode shapes is observed [10]. If an electrostatic voltage is applied (V_DC = 50 V), the frequency trends and the corresponding mode shapes visibly change (Fig. 2b). The crossing between f_1 and f_2 no longer develops, and the scenario turns into veering (avoided crossing) between the same natural frequencies, since systems with repeated natural frequencies commonly show a very high sensitivity to any introduced perturbation (the V_DC bias in this case). As the axial load N_non increases, the first natural frequency f_1 rises until it becomes close to f_2, after which it continues to increase with a lower slope. The second natural frequency f_2 exhibits an initial decrease until its value becomes close to f_1, after which f_2 increases again over the rest of the range reported in the figure. This veering phenomenon is accompanied by the hybridization of the engaged modes [39]. This aspect is analyzed in Fig. 3, where we report the mode shapes at crossing for V_DC = 0 V (Fig. 3a) and at different sections of the veering zone for V_DC = 50 V (Fig. 3b-d). At crossing, we can recognize the distinct shapes of both the first and the second modes. In contrast, a strong hybridization is observed in the veering zone, which leads the modes to become more similar to each other, although not identical. In addition, we can clearly observe the effect of the half-electrode configuration, which, when driven at a high electrostatic voltage, visibly breaks the symmetry of the mode shapes. A further illustration of the influence of the axial load N_non on the mode contributions around the veering zone is depicted in Fig. 3e. Regarding the third natural frequency, its trend initially decreases and subsequently starts rising again (Fig. 2b), while its mode shape is not clearly affected by the asymmetry of the electrode (Fig. 3e).
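A toy two-degree-of-freedom calculation (not the paper's model) illustrates why a small symmetry-breaking perturbation, such as the one introduced here by the DC bias acting through the half electrode, turns crossing into veering: two modal stiffnesses that vary oppositely with an axial-load-like parameter give intersecting frequency loci when uncoupled, while any small coupling makes the loci repel and the associated modes hybridise. All numbers below are arbitrary.

import numpy as np

def toy_frequencies(N_vals, coupling):
    # Eigenfrequencies of a 2-DOF system (unit masses) whose diagonal stiffnesses
    # move in opposite directions with N; 'coupling' mimics the symmetry-breaking
    # perturbation that couples the two modes.
    freqs = []
    for N in N_vals:
        k1 = 50.0 + 0.4 * N            # "first mode" stiffness grows with N
        k2 = 95.0 - 0.2 * N            # "second mode" stiffness decreases with N
        K = np.array([[k1, coupling], [coupling, k2]])
        freqs.append(np.sqrt(np.linalg.eigvalsh(K)))
    return np.array(freqs)

N_vals = np.linspace(0.0, 150.0, 301)
crossing = toy_frequencies(N_vals, coupling=0.0)   # loci intersect: crossing
veering = toy_frequencies(N_vals, coupling=3.0)    # loci repel: veering
print(np.min(crossing[:, 1] - crossing[:, 0]))     # ~0 gap without coupling
print(np.min(veering[:, 1] - veering[:, 0]))       # finite minimum gap with coupling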
In the following sections, we analyze the dynamics while passing through the veering phenomenon, where there is an approximate 1:1 internal resonance between the first and second natural frequencies. In addition, the third natural frequency is about twice their value, which allows a 2:1 internal resonance with the previous two frequencies to be established simultaneously. Internal resonances at crossover: low excitation We investigate the dynamics of the slack arc microbeam around the crossover zone, focusing on the case where the introduction of the DC bias voltage induces the veering phenomenon. To analyze the evolution of the response scenario, we numerically sweep the excitation frequency around the first and second modes. We initially focus on a low electrodynamic voltage excitation and a low damping ratio, which allows examining the main aspects of the mode interactions arising at crossover in the response dynamics. In particular, we assume V_DC = 50 V, V_AC = 1.2 V, and a damping ratio ζ = 4 × 10⁻⁵. We analyze various sections of the dynamic response drawn at different values of the axial load, specifically N_non = 85, N_non = 105, and N_non = 125. We report the frequency response curves representing the maximum amplitude of oscillation and the corresponding mode-contribution curves, where the first, second, and third mode components are in black, red, and blue, respectively. Due to the mode hybridization, evaluating the dynamics at different microbeam positions is crucial for a comprehensive analysis of the mode interaction as the system experiences different types of internal resonances. Thus, the displacement of the arc microbeam is evaluated at the midpoint (x = 0.5), quarter point (x = 0.25), and three-quarter point (x = 0.75). All results are obtained via long-time integration combined with the shooting technique and local stability analysis based on Floquet theory [3]. Simulations are conducted via numerical codes developed in MATLAB.
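The long-time-integration strategy can be illustrated on a toy model. The sketch below (in Python, whereas the paper's own codes are in MATLAB) sweeps the excitation frequency of a single-degree-of-freedom hardening Duffing oscillator, integrating past the transient at each frequency and reusing the final state as the initial condition for the next step, so that one branch of the hysteretic response is tracked; all parameter values are arbitrary, and the shooting technique, Floquet stability analysis, and the coupled three-mode arch model are not included.

import numpy as np
from scipy.integrate import solve_ivp

# Toy hardening Duffing oscillator: x'' + 2*zeta*x' + x + alpha*x**3 = F*cos(Omega*t)
zeta, alpha, F = 0.01, 1.0, 0.1

def rhs(t, y, Om):
    x, v = y
    return [v, -2.0 * zeta * v - x - alpha * x**3 + F * np.cos(Om * t)]

def steady_amplitude(Om, y0):
    T = 2.0 * np.pi / Om
    # integrate 200 forcing periods and keep only the last 50 as the steady state
    sol = solve_ivp(rhs, (0.0, 200.0 * T), y0, args=(Om,), max_step=T / 40, rtol=1e-8)
    mask = sol.t > 150.0 * T
    return np.max(np.abs(sol.y[0][mask])), sol.y[:, -1]

amps, y0 = [], [0.0, 0.0]
for Om in np.linspace(0.8, 1.6, 60):      # forward sweep, reusing the previous end state
    a, y0 = steady_amplitude(Om, y0)
    amps.append(a)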
We consider the response dynamics as the veering phenomenon is approached, at N_non = 85. The first and second natural frequencies occur, respectively, at f1 = 84.4 kHz and f2 = 95 kHz, while the third occurs at f3 = 185 kHz (Fig. 2b). Here, the hybridization of the first and second modes begins (Fig. 3c). At the microbeam midpoint, the contribution of the first and third modes is close to its maximum, while the second mode, which would otherwise have a node there, contributes because of the hybridization. The frequency response curves at different positions of the arc microbeam, with their associated modal contributions, are shown in Fig. 4. The frequency response diagram at the midpoint and its corresponding modal contributions are reported in Fig. 4a1 and b1. Around both the first and second resonance frequencies, the resonant and the non-resonant branches are clearly visible. The resonance curves of the hybridized first symmetric mode bend toward lower frequency values, denoting softening behavior (i.e., the first mode is dominated by the quadratic nonlinearity originating from the electrostatic force and the microbeam curvature), whereas the resonance curves of the hybridized first antisymmetric mode bend toward higher values, denoting hardening behavior (i.e., the second mode is dominated by the cubic nonlinearity originating from the midplane stretching). Both modes exhibit a wide extent of the resonant branch, which leads to a considerable range of coexistence with the corresponding non-resonant one. As shown in Fig. 4b1, when sweeping around the second mode, the contributions of mode 1 and mode 2 have about the same amplitude at the microbeam midpoint but opposite sign; this results in a near suppression of the mode in the corresponding total response in Fig. 4a1 (although not at the other points). As evidenced at the midpoint in Fig. 4b1, the dynamics of the first mode are dominated by their own contribution, while this mode also contributes to the dynamics around the second mode. (Fig. 3 caption: Mode shapes of the three lowest natural frequencies of the arc microbeam at (a) crossing, V_DC = 0 V and N_non = 0; (b)-(d) veering, V_DC = 50 V and N_non = 85, 105, and 125, respectively, denoting hybridization of modes and loss of symmetry due to the half-electrode configuration; (e) absolute amplitude of the first three normalized mode shapes of the arc microbeam at (from left to right) midpoint, quarter point, and three-quarter point for V_DC = 50 V as N_non varies.) Similar findings are observed at the three-quarter point. Furthermore, the ratio between the third and second natural frequencies remains around two, leading to the activation of the 2:1 internal resonance between the involved modes. This can be confirmed by analyzing the contribution of each mode depicted in Fig. 4b1-b3. In particular, the emergence of a new resonance branch at Ω = 91.5 kHz for all beam positions indicates the main contribution of the third mode to the response due to nonlinear coupling via the 2:1 internal resonance. At N_non = 105, the response scenario rapidly changes. The first and second natural frequencies are very close, as seen from the variation of the natural frequencies with the axial load (Fig. 2b). However, they do not coincide perfectly, due to the static deflection induced by the DC bias voltage from the half electrode. (Fig. 4 caption: (a1)-(a3) Frequency response curves at N_non = 85, with V_DC = 50 V, V_AC = 1.2 V, and ζ = 4 × 10⁻⁵; (b1)-(b3) corresponding mode-contribution response curves, with black, red, and blue denoting the first, second, and third mode contributions, respectively; dynamics at (a1), (b1) midpoint, (a2), (b2) quarter point, and (a3), (b3) three-quarter point.) Here, there is a strong hybridization of the two modes involved (Fig. 3c and e). While the response around the first resonance frequency continues to exhibit softening bending, the main differences occur in the dynamics around the second resonance frequency, as demonstrated in Fig. 5a1-a3. The extent of the resonant branch strongly decreases, while a band of irregular motion (represented by stars) emerges between the resonant and the non-resonant branches. In particular, at Ω = 92 kHz the system passes from stable periodic oscillations to irregular ones, after which it returns to the stable periodic non-resonant branch at Ω = 94 kHz. Evidence of a similar emergence of irregular dynamics at crossing was demonstrated experimentally for the classical electrostatically actuated arc microbeam in Hajjaj et al. [10]. Note that the emergence of ranges of irregular motion is frequently observed when driving the system at internal resonance [3]. In addition, as for the previous case at N_non = 85, a hardening branch appears at Ω = 91 kHz, demonstrating the nonlinear coupling due to the 2:1 internal resonance. Despite this weak contribution, it is noticeable in the dynamics of the arc microbeam at all positions. At N_non = 125, the interchange of modes via hybridization is almost complete. The dynamics at different microbeam positions are depicted in Fig. 6.
The major oscillation amplitude at the midpoint and quarter point is around the second resonance frequency, which corresponds to the hybridized first symmetric mode, showing softening behavior. The dynamics around this mode are more developed than those around the hybridized first antisymmetric one, of which we can see the onset. The latter is characterized by a mix of softening and hardening behavior observed at all the examined microbeam positions. To better observe the underlying dynamical behavior induced by the crossing phenomenon, we analyze two additional sections of the response dynamics, performed at N_non = 105 and at N_non = 115, where we consider an increased excitation amplitude V_AC = 14 V while assuming a higher damping ratio, ζ = 0.02. The higher damping is used to investigate the system response when operated in air, which is vital in certain applications [28,40]. At N_non = 105 (Fig. 7), there is a continuation of the resonance branch of the hybridized first mode into the resonance branch of the hybridized second mode, which induces an M-shape response demonstrating the nonlinear coupling between the first and second modes via the 1:1 internal resonance. At the midpoint and three-quarter point, the second mode mainly affects the dynamics around the second resonance and contributes to the non-resonant branch around the first mode; yet, at these points, the first-mode dynamics dominate the response over the entire frequency range, including at the second resonance frequency. Conversely, at the quarter point, the response of the arc microbeam is dominated by the second mode. The 2:1 internal resonance continues to be activated at about Ω = 92 kHz, where it occurs along the M-shape curve of resonant branches. The contribution of the third mode is characterized by a small peak. Unlike Sect. 4, the response does not split into resonant and non-resonant branches, due to the high damping assumption. Furthermore, at Ω = 75.5 kHz we observe, over a narrow frequency range, that the first-mode resonant branch evolves into period-2 oscillations, which rapidly lead to a small frequency range of chaotic responses [3]. The time history drawn at Ω = 77.58 kHz is reported in Fig. 8, indicating the contribution of two modes to the response and leading to a doubled period, as confirmed by the corresponding Poincaré section. Along the right-hand branch of the M-shape curve, at about Ω = 96 kHz, the time histories and the contour arising in the Poincaré section (Fig. 9) show the evolution of the dynamics into a quasi-periodic motion, which might be the consequence of a reversed Hopf bifurcation [3]. At N_non = 115 (Fig. 10), the elevated voltage excitation leads to more complex dynamics exhibiting a mix of hardening and softening behaviors. The arch response is governed by a softening bending behavior at the midpoint (Fig. 10a1), while it shows a mixture of hardening and softening at the quarter and three-quarter points (Fig. 10a2 and a3). Figure 10 suggests the onset of mode separation. It also depicts the widening of the irregular-motion bands, especially around the second mode, where they coexist with the resonant branch. Conclusions This study presented a numerical investigation of the combined internal resonances that may arise at crossover in a slacked micromachined resonator. A partial stationary electrode facing the convex surface of the arc beam has been used to electrically drive the arc resonator.
A nonlinear model has been developed for the slacked arc microbeam, taking into account the nonlinearities associated with midplane stretching and the electrostatic load. After focusing on the hybridization of the first symmetric and antisymmetric modes at crossover, extensive investigations have been conducted, where combined 1:1 and 2:1 internal resonances have been observed among the three lowest modes. The evolution of the frequency response dynamics has been analyzed while passing through the veering zone, showing the transition from M-shape resonance curves to more complex behavior. Focus has been placed on the appearance of regions of coexisting periodic and irregular motions. Given the rich and complex nonlinear dynamics of arch beams, it will be interesting, as part of future work, to investigate the activation of combined internal resonances involving different higher-order modes to optimize MEMS performance. Note that in the present paper we consider only quadratic and cubic terms in the microbeam model formulation; yet higher-order nonlinearities may emerge near the cancellation of the cubic and quadratic nonlinearities [41,42], where a thorough investigation should be conducted. In the present simulations, only viscous damping has been taken into account; however, we have to mention that, when operating the arc microbeam in the slack configuration, squeeze-film damping may also affect the system's dynamics, although investigating its influence is beyond the scope of the present paper. In a nutshell, this work motivates further research to exploit the dynamics, and particularly the internal resonances, of slacked curved resonators for practical applications, such as sensing and frequency stability, thanks to the low actuation voltages compared to the classical actuation of arch MEMS resonators. Funding The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.
6,456.8
2022-10-05T00:00:00.000
[ "Engineering", "Physics" ]
Carbofuran degradation mediated by three related plasmid systems Two carbofuran-metabolizing Sphingomonas strains, TA and CD, were isolated from soils with differing histories of exposure to carbofuran. These strains were compared with a previously described strain, Sphingomonas sp. CFO6, with regard to growth rate, formation of metabolites, and plasmid content and structure. Extensive regions of similarity were observed between the three different plasmid systems, as evidenced by cross-hybridization. In addition, all three systems harbor IS1412, an insertion sequence (IS) element involved in heat-induced loss of the carbofuran phenotype in CFO6, and heat-induced carbofuran-deficient mutants of all three strains correlated with loss of IS1412. A carbofuran-deficient mutant of TA generated by induction of IS elements was complemented by reintroduction of the wild-type plasmid, confirming the presence of genes required for carbofuran metabolism on this plasmid. Carbofuran metabolism in these three strains is clearly linked via plasmids of different numbers and sizes that share extensive common regions, and carbofuran-degrading genes may be associated with active IS elements. © 2000 Federation of European Microbiological Societies. Published by Elsevier Science B.V. All rights reserved. Introduction Genetic systems encoding metabolism of pesticides provide an attractive framework for studying the development of catabolic pathways and relationships between plasmids in soil bacteria. Most pesticides currently in use have been present in the biosphere less than 40 years, yet many of these compounds are rapidly biologically degraded in soil, suggesting that soil bacteria possess efficient mechanisms for recruitment and assembly of novel biochemical pathways. As with biodegradation of many complex aromatic compounds, the development of many pesticide-degrading pathways likely involves recruitment of catabolic genes from various sources by horizontal gene exchange mediated by plasmids, and assembly of novel pathways catalyzed by mobile elements such as insertion sequence (IS) elements and transposons [1,2]. An understanding of the development of pathways involved in metabolism of pesticides will provide a greater understanding of evolutionary processes in soil bacteria. We are interested in bacterial metabolism of the insecticide carbofuran (furadan; 2,3-dihydro-2,2-dimethyl-7-benzofuranyl methylcarbamate) (Fig. 1). Carbofuran was introduced in 1967 by FMC Corporation (Princeton, NJ, USA) and is used extensively in the United States, Europe and Asia [3]. It is of environmental importance due to its high mammalian toxicity (LD50 = 2 mg kg⁻¹ in mice) [4] and its potential for contamination of ground waters. Carbofuran is metabolized by a variety of bacteria, indicating that the genetic systems controlling its metabolism have either rapidly evolved de novo, or existing systems involved in the metabolism of other naturally occurring or xenobiotic compounds were recruited by the degrading organisms. Many strains capable of completely metabolizing carbofuran to CO₂ harbor multiple plasmids, although few have been extensively characterized [5,6]. We are interested in studying the development of carbofuran-degrading pathways in soil bacteria, and in defining relationships between carbofuran-degrading plasmids harbored by different strains of soil bacteria. We recently described the initial characterization of a carbofuran-degrading bacterium, Sphingomonas sp. CFO6 [6].
The genetic topology of carbofuran metabolism in CFO6 appears to be complex; maintenance of five plasmids appears to be required for metabolism of the insecticide by this strain, and extensive regions of similarity exist between four of the five plasmids. As might be expected from a system poised for rapid recruitment of foreign genes, the CFO6 plasmids are rich in IS elements, with at least one (IS1412) implicated in instability of the carbofuran-degrading phenotype. Growth of CFO6 on carbofuran is rather slow, with stationary-phase growth typically reached in 4 days. This slow growth might be expected of a strain carrying large amounts of plasmid DNA (five separate plasmids) and a system that may not be efficiently regulated with respect to catabolic functions. One might expect that more efficient systems (including fewer plasmids required for carbofuran metabolism, and consolidation of genes into well-regulated operons) might arise with time. Two other carbofuran-metabolizing Sphingomonas strains, TA and CD, were recently isolated from soil. Our objectives for the work presented here include analysis of: (1) phylogenetic relationships between TA, CD and CFO6; (2) possible involvement of plasmids in carbofuran metabolism by TA; (3) similarities and differences between plasmids in TA, CD and CFO6; and (4) the possible presence of common IS elements between the three strains. Comparison of plasmids encoding similar functions, particularly those in newly evolving systems, will shed light on evolutionary processes among soil bacteria and the development of novel metabolic pathways. Soil and sampling Soil samples (0-15 cm depth) were collected from an experimental field site near Hastings, FL, USA, after four consecutive annual applications of carbofuran. After two annual applications, this site exhibited enhanced degradation toward carbofuran (Ou, unpublished observation). Soil samples were also collected from a nearby control site, which had never been treated with carbofuran. Soil samples were stored in the dark at 4°C and used within 1 month of collection. Isolation and screening of carbofuran degraders A batch-culture enrichment technique [9] was used to isolate carbofuran-degrading bacteria from the different soil samples. Strains TA and CD were selected due to their rapid growth on carbofuran as a sole source of carbon. Growth and mineralization of carbofuran by TA Five ml of 1-day-old bacterial culture was inoculated into a 250-ml biometric flask (Bellco, Vineland, NJ, USA) containing 50 ml of minimal medium (BMM) [6], 50 μg ml⁻¹ of technical-grade carbofuran, and 30 Bq ml⁻¹ of URL or CAL ¹⁴C-carbofuran. The side arm of the flask contained 5 ml of 0.5 M KOH for trapping evolved ¹⁴CO₂. At predetermined time intervals, the KOH was removed from the side arm and replaced with fresh KOH. At the end of incubation (72 h), 10 ml of culture fluid was withdrawn and used for vacuum filtration through a 0.2-μm nylon filter (Micron Separation, Westboro, MA, USA). The filter was washed three times under vacuum with 5 ml of BMM. ¹⁴C activity in the KOH (¹⁴CO₂), the washed filter and the filtered solution was quantified by liquid scintillation counting (LSC). In conjunction with the sampling of the KOH traps, 100 μl of culture fluid was removed and diluted with an equal volume of 0.1 M phosphate buffer, pH 7.2. After mixing, two drops of the diluted fluid were deposited on the counting chamber of a Petroff-Hausser bacteria counter, and cells were counted under a phase-contrast microscope [10].
Degradation and metabolite formation Ten ml of 1-day-old bacterial culture was inoculated into a 1000-ml Erlenmeyer flask containing 500 ml BMM, technical-grade carbofuran (50 μg ml⁻¹) and URL ¹⁴C-carbofuran (80 Bq ml⁻¹). After inoculation, mineralization was monitored by the method described by Ou [11]. Metabolites present in the culture medium were determined by extraction from cell-free filtrates. Filtrates (10 ml) were acidified with concentrated HCl to pH < 2, and extracted twice with 25 ml of ethyl acetate. After removal of moisture with anhydrous sodium sulfate, the ethyl acetate extracts were evaporated to dryness on a rotary evaporator. The residues were dissolved in anhydrous methanol and concentrated under a gentle stream of N₂ gas to 0.3 ml. Carbofuran and its metabolites in the concentrated extracts were separated and quantified by TLC autoradiographic analysis and LSC as described previously by Trabue et al. [12]. 16S rDNA sequencing and phylogenetic analysis Genomic DNA was isolated from TA by a standard cetyltrimethylammonium bromide and isopropanol precipitation technique [13]. The 16S rDNA gene was amplified by PCR by standard procedures using primers 27f and 1406r according to Lane [14]. PCR products were ligated into a TA cloning vector (Invitrogen, San Diego, CA, USA) according to the vendor's instructions and transformed into Escherichia coli cells. Recombinant plasmids were purified and used as templates for direct DNA sequencing with standard primers. In the case of strain CD, the genomic DNA was obtained by boiling the cell suspension in sterilized water for 5 min, and appropriate dilutions were used as templates in the PCR reaction using the 27f and 1492r primers. The PCR product was cloned into the pGEM-T vector system (Promega, Madison, WI, USA). The resulting sequences were assembled to produce contigs of ca. 1400 bases, which were aligned using the Pileup function of GCG (Wisconsin Package Version 10.0, GCG, Madison, WI, USA). Phylogenetic trees were constructed using maximum parsimony (Paup*4.0b2a, Sinauer, Sunderland, MA, USA) and neighbor-joining using the Jukes and Cantor method [15], with bootstrap analysis (100 replicates) in both cases. Plasmid isolation and characterization Plasmids from strains TA, CD, CFO6 and the transconjugants were isolated by a modification of the method of Feng et al. [6], utilizing Qiagen miniprep columns (Valencia, CA, USA). Plasmids were further purified by CsCl-ethidium bromide density gradient ultracentrifugation [16]. Mutagenesis and complementation Single colonies of TA and CD were inoculated into LB broth and grown at 40°C or 41°C for 24-48 h. Colonies from the heat-treated bacteria were screened for carbofuran-catabolism-deficient mutants as described by Feng et al. [6]. Clones from heat-treated bacteria were examined for differences from the wild-type strain by examination of the restriction patterns resulting from digestion of the resident plasmid with BamHI. Mutant TA50 (carbofuran-deficient and spontaneously kanamycin- and ampicillin-resistant) was used as the recipient in triparental matings and in electroporation with the wild-type plasmid [6]. Plasmid pCT001 in transconjugants was confirmed by plasmid isolation and Southern blot hybridization as described by Feng et al. [6,16].
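As a side note to the 16S rDNA phylogenetic analysis described above, the Jukes-Cantor correction underlying the neighbor-joining distances is d = -(3/4) ln(1 - 4p/3), where p is the proportion of differing aligned sites. The short Python sketch below illustrates the calculation; the two sequences are made-up fragments for illustration only and are not the TA or CD 16S rDNA data.

import numpy as np

def p_distance(seq1, seq2):
    """Proportion of differing sites between two aligned sequences (gap positions ignored)."""
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != '-' and b != '-']
    diffs = sum(a != b for a, b in pairs)
    return diffs / len(pairs)

def jukes_cantor(p):
    """Jukes-Cantor corrected distance; saturated (undefined) for p >= 0.75."""
    if p >= 0.75:
        return float('inf')
    return -0.75 * np.log(1.0 - 4.0 * p / 3.0)

# toy aligned 16S fragments (hypothetical, for illustration only)
ta = "ACGTGGCTAACTTCAGGTGA"
cd = "ACGTGGCTAACTACAGGTGA"
print(jukes_cantor(p_distance(ta, cd)))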
Restriction enzyme digestions, blotting and hybridization Plasmid DNAs were digested with BamHI, separated by 0.7% agarose gel electrophoresis, and blotted onto a nylon membrane using alkali methods according to the manufacturer's instructions (Amersham Pharmacia Biotech, Piscataway, NJ, USA). The DNAs were hybridized with probes generated from CFO6 or TA plasmids or from IS1412 and labeled with [³²P]dCTP by random priming according to the vendor's recommendations (Gibco BRL, Gaithersburg, MD, USA). Isolation and identification Strain TA was isolated from soil after four successive annual field treatments with carbofuran, and CD was isolated simultaneously from soil taken from an adjacent site with no history of direct applications of carbofuran. Both strains were Gram-negative, motile, short rods with a single polar flagellum. When grown on LB agar, TA and CD were pigmented yellow. Both strains utilized carbofuran as a sole source of C or N for growth, and also utilized carbofuran phenol and methylamine (hydrolysis products of carbofuran) as sole sources of C for growth. In addition, both strains utilized two other carbamate pesticides, carbaryl and baygon, as sole sources of C for growth. When grown in BMM-carbofuran, a water-soluble red metabolite was formed by both strains. This red metabolite was also produced during metabolism of carbofuran by other bacteria [17], including Sphingomonas sp. CFO6 [6]. Phylogenetic analysis based on 16S rDNA placed TA and CD firmly within the Sphingomonas group of the α-proteobacteria (Fig. 2). The 16S rDNA phylogeny indicates that TA and CD are phylogenetically very similar. Sphingomonas sp. CFO6 belongs to a separate subgroup of Sphingomonas. Growth and mineralization Both TA and CD grew well on carbofuran and rapidly mineralized URL and CAL ¹⁴C-carbofuran to ¹⁴CO₂ (Fig. 3). The patterns of mineralization of CAL ¹⁴C-carbofuran by the two isolates were similar to the growth patterns. After 22 h of incubation, 85% and over 90% of the applied CAL ¹⁴C-carbofuran were mineralized by TA and CD, respectively. It should be pointed out that when CAL ¹⁴C-carbofuran is hydrolyzed, the carbonyl ¹⁴C is immediately released as ¹⁴CO₂, together with methylamine and carbofuran phenol (Fig. 1). At the end of 72 h of incubation, approximately 47% of the URL ¹⁴C label versus approximately 85% of the CAL label was recovered as CO₂ for TA. This distribution was similar to that observed for CD (approximately 47% versus 88% for URL and CAL, respectively) (Table 1). The initial step in carbofuran degradation is typically hydrolysis of the carbamate linkage, resulting in liberation of ¹⁴CO₂ from CAL ¹⁴C-carbofuran (Fig. 1), and hence no ¹⁴C should be associated with biomass. Small amounts of ¹⁴C were found to be associated with biomass, however. It is likely that the ¹⁴C associated with biomass was due to incomplete washing or to impurities that might have been converted to biomass by the isolates. Both TA and CD utilized the aromatic ring for growth, as indicated by the incorporation of URL ¹⁴C into biomass (Table 1). Approximately 24.8% and 16.3% of the applied ¹⁴C remained in the cell-free spent growth media for TA and CD, respectively; the residual label was likely associated with non-metabolized carbofuran and metabolites. Both TA and CD grew much faster in pure culture than did CFO6 under similar conditions (data not shown).
CFO6 requires several days of growth to reach the late exponential growth phase, whereas only 2 days are required for TA and CD to reach this stage, suggesting that carbofuran metabolism may be more efficient in these strains than in CFO6. Plasmid content and comparisons Strain TA harbors a single plasmid (pCT001) of approximately 100 kb as determined by restriction digestion, and CD harbors four plasmids (Fig. 4A). Note that pCD2 and pCT001 are approximately the same size. These plasmids are compared with those of Sphingomonas sp. CFO6, which harbors five plasmids ranging in size from 5.5 kb to over 200 kb [6]. These three systems share significant amounts of sequence similarity, including at least one common IS element (IS1412) (Fig. 4B). IS1412 hybridizes with the similarly sized plasmids pCT001 and pCD2, and extensive regions of similarity are evident between restriction digests of total plasmid DNA hybridized with pCT001 (Fig. 5A) and with CFO6 plasmid DNA (Fig. 5B). Extensive similarity between the systems is evident from the similarly sized hybridizing electrophoretic bands common to CFO6 and CD, and shared sequences between pCT001 and the other two systems are evident from the differently sized bands hybridizing in pCT001. Carbofuran metabolism is mediated by pCT001 in Sphingomonas sp. TA In order to investigate the potential role of pCT001 in carbofuran metabolism, various strategies to cure TA of its plasmid were attempted. Repeated efforts to cure TA of its plasmid by growth at high temperatures (38-42°C) and by repeated passage on non-selective growth media failed. Repeated attempts to introduce pCT001 into a neutral background (Pseudomonas fluorescens M480R) with selection on carbofuran as a sole source of either carbon or nitrogen also failed, possibly suggesting that some chromosomally encoded functions may be required for metabolism of carbofuran in this strain. Mutants lacking the ability to grow on carbofuran were generated by growth at 41°C, yielding an approximately 50-kb deletion in pCT001 (TA50; Fig. 5, lane 5). Also deleted in this mutant was IS1412 (data not shown); mobilization of IS1412 was previously shown to be associated with loss of carbofuran metabolism in CFO6 [2]. These mutants failed to grow on carbofuran as a sole source of carbon and did not mineralize the aromatic ring of the pesticide, although functions encoding mineralization of the side chain (carbofuran hydrolase) remained (Table 2). The loss of numerous CFO6-like restriction fragments in TA50 suggests common sequences required for carbofuran metabolism (Fig. 5A, lanes 4 and 5). The carbofuran phenotype was recovered by mating a spontaneous kanamycin/ampicillin-resistant derivative of TA50 with wild-type TA, which resulted in displacement of the mutant pCT001 in the transconjugant TA-TC (Fig. 5, lane 6; Table 2). Displacement of the resident pCT001 in the deletion mutant was also confirmed by electrotransformation of TA50 with pCT001 (data not shown). This confirms that at least some functions required for carbofuran metabolism are encoded by pCT001. Discussion Carbofuran metabolism in Sphingomonas spp. TA, CD and CFO6 is linked to a common ancestor via their resident plasmids, and it is likely that IS elements such as IS1412 were responsible for the recruitment and rearrangements linking carbofuran metabolism in these strains. The arrangements of genes in CD and CFO6 are more similar to each other than either is to TA, as indicated by the numbers of plasmids and the commonly hybridizing restriction fragments.
The growth rates of CD and TA are more similar to each other than either is to that of CFO6, suggesting that the CD plasmids encode relatively small, but significant, differences in carbofuran metabolism relative to CFO6. The additional plasmid DNA in CD relative to TA does not appear to adversely affect its growth on carbofuran relative to TA, suggesting that the genetics of carbofuran metabolism in CD may ultimately be more similar to that in TA than to that in CFO6. Carbofuran metabolism in these three strains is therefore related via their resident plasmids, but significant differences in carbofuran metabolism exist between the three strains. Future studies will focus on defining the nature of the related and differing sequences, which should help elucidate the development of carbofuran metabolism in soil bacteria. It is not possible to retrace the precise steps involved in the evolution of carbofuran metabolism in these three strains, but speculations regarding the relationships are possible at this time. It is clear that genetic exchange, mediated by plasmids, allowed the passage of carbofuran genes between an unknown number of intermediate strains, resulting in common genes on different plasmids in CD, TA and CFO6. These plasmids have linked carbofuran metabolism in phylogenetically similar, yet geographically diverse, strains. It is also likely that IS elements such as IS1412 were responsible for the initial recruitment of the carbofuran genes from disparate sources, and for the differences in plasmid structure between the three strains. Loss of IS1412 corresponded with a loss of carbofuran metabolism in all mutants studied, suggesting that IS1412 is linked with these genes and may have been responsible for their recruitment from various host DNAs. It is tempting to speculate that the core fragments shared between CD and CFO6 (Fig. 5) were subject to IS-mediated rearrangement to form TA, resulting in a single plasmid (pCT001). This might be expected in a soil subject to repeated carbofuran applications, such as the one from which TA was isolated; smaller amounts of plasmid DNA are likely to yield more efficient growth in soils. Knowledge of the replication functions of the plasmids in these strains will provide us with a greater understanding of the relatedness of the plasmids, as will more precise mapping of the carbofuran-degrading genes and associated IS elements. These studies are currently underway.
4,228
2000-06-01T00:00:00.000
[ "Environmental Science", "Biology" ]
Hunting for neutrino emission from multi-frequency variable sources Pinpointing the neutrino sources is crucial to unveil the mystery of high-energy cosmic rays. The search for neutrino-source candidates from coincident neutrino-photon signatures and electromagnetic objects with peculiar flaring behaviors have the potential to increase our chances of finding neutrino emitters. In this paper, we first study the temporal correlations of astrophysical flares with neutrinos, considering a few hundreds of multi-frequency sources from ALMA, WISE, Swift, and Fermi in the containment regions of IceCube high-energy alerts. Furthermore, the spatial correlations between blazars and neutrinos are investigated using the subset of 10-year IceCube track-like neutrinos with around 250 thousand events. In the second test, we account for 2700 blazars with different types of flaring phases in addition to sole position. No significant neutrino emissions were found from our analyses. Our results indicate an interesting trend showing the infrared flaring stages of WISE blazars might be correlated with arrival times of the neutrino alerts. Possible overflow of neutrinos associated with two of our blazar sub-samples are also illustrated. One is characterized by a significant flaring lag in infrared with respect to gamma-rays, like seen for TXS0506+056, and the other is characterized by highly simultaneous infrared and gamma-ray flares. These phenomena suggest the need to improve current multi-frequency light-curve catalogs to pair with the advent of more sensitive neutrino observatories. INTRODUCTION The origin of high-energy cosmic rays is one of the most important open questions for more than a century. Neutrinos are ideal messengers for tracking the origin of cosmic rays as they are undeflected when traveling through space. Since IceCube reported the detection of high-energy astrophysical neutrinos (IceCube Collaboration 2013; Aartsen et al. 2015Aartsen et al. , 2016Abbasi et al. 2021), identifying the sources of those neutrino events is one of the most pressing challenges in modern astrophysics. With no significant anisotropy found from the diffuse flux of astrophysical neutrinos, a substan- The first hint of a statistical connection between high synchrotron peaked blazars (HSPs/HBLs) and neutrinos was reported by Padovani et al. (2016). They suggested a chance probability of association between 2FHL 1 HBLs with IceCube events to be ∼ 0.4 − 1.3%, depending on γ-ray fluxes. Applying the "energetic test" presented in Padovani & Resconi (2014), Padovani et al. (2016) further reported ∼ 5 probable HBLs counterparts for IceCube neutrinos. Other searches for neutrino counterparts with samples of γ-ray blazars found no evidence of γ-ray emission associated with IceCube neutrino events (Brown et al. 2015;Palladino & Vissani 2017;Krauß et al. 2018). The importance of multi-frequency data to single out the most likely candidates for neutrino events has been highlighted in Padovani et al. (2016). Righi et al. (2019b) and Franckowiak et al. (2020) analyzed the Fermi-detected blazars located within neutrino containment regions with a multi-frequency approach. Some of those potential neutrino blazars show temporal coincidence between γ-ray flares and neutrino events, but yet there is no compelling evidence to conclude on the association between the γ-ray photons and the IceCube neutrinos. 
Besides, Luo & Zhang (2020) suggested that the multi-frequency selected blazar sample, 5BZCAT 2 , showed no significant correlation with the IceCube alert list. Later, Giommi et al. (2020) reported a ∼ 3.2σ correlation excess with γ-ray HBLs and IBLs 3 in the vincinity of IceCube high-energy track-like events. They identified probable γ-ray blazar counterparts for IceCube neutrinos using the VOU-Blazars tool (Chang et al. 2020), designed to find blazar/AGN candidates with multi-frequency data from the Virtual Observatory 4 . The VHE γ-rays produced inevitably during the photo-hadronic process might cascade down to lower energy due to the absorption within the source or in further interactions with the extragalactic background light via photon-photon annihilation (Franckowiak et al. 2020). The source environment of astrophysical neutrino counterparts might be optically thick to GeV γ-rays. Indeed, most of the neutrino activity has no γ-ray flare companion. Given that the pionic γ-ray photons may 1 2FHL: Second Fermi-LAT Catalog of High-Energy Sources, Ackermann et al. (2016) 2 The 5th edition of the Roma- BZCAT Massaro et al. (2015) 3 According to the peak frequency of synchrotron radiation (ν S peak ), blazars are divided into high-(HSP/HBL: ν S peak ≥ 10 15 Hz), intermediate-(ISP/IBL: 10 14 Hz ≤ ν S peak < 10 15 Hz), and low-(LSP/LBL: ν S peak < 10 14 Hz) peaked sources respectively (Abdo et al. 2010a) cascade down to X-ray band in blazar jets with strong photons fields, a stacking analysis of Swift BASS 5 objects (Goswami et al. 2021) and a time-dependent search using the position of X-ray selected blazars from 5BZ-CAT (Sharma & O'Sullivan 2021) were proposed. Recently, Plavin et al. (2020) and Plavin et al. (2021) found that the positions of radio-bright blazars are statistically coincident with arrival directions of neutrino events at 4σ level, when considering a complete fluxdensity-limited sample of radio-loud (jetted) AGNs selected from VLBI Radio Fundamental Catalog 6 . Radio emissions above 10 GHz 7 were found to increase around neutrino arrival times for those potential neutrino VLBIselected blazars. This was later confirmed by Hovatta et al. (2021) with OVRO 15 GHz light curves 8 . While Zhou et al. (2021) found no significant correlation between the same population of radio-bright blazars and IceCube 10-year track-like events in their stacking analyses. Illuminati et al. (2021) further performed a timedependent search for neutrino flares from the direction of those radio-bright blazars but found no significant ANTARES flares. The contribution of blazars to the observed astrophysical neutrino flux has been constrained by crosscorrelation joint stacking analyses between IceCube datasets and blazar samples. The stacking limits depend on the assumption that all the stacked sources have similar neutrino spectral shapes. Indexes of 2 (Huber 2019) and 2.5 (Aartsen et al. 2017a) are generally applied in the stacking analyses (Smith et al. 2021), motivated by Fermi acceleration and the spectra of diffuse neutrino flux. From the stacking, blazars' contributions are found to be 15 − 27%. Constraints obtained from other methods, such as multiplets and auto-correlation (Murase et al. 2014;Yuan et al. 2020; Bartos et al. 2021), or prediction according blazar hadronic models or observed data are all consistent with the stacking limits (Padovani et al. 2015(Padovani et al. , 2016Murase et al. 2014). 
All these results suggest that blazars may not be the dominant sources of the IceCube diffuse neutrino flux. Since these limits only allow constraining the average neutrino emission for sources on a long time scale, it is possible that individual sources can outshine these limits over a shorter period, such as their flaring phases (Huber 2019). As the correlation of γ-ray radiation with neutrino emissions may not be straightforward due to the cas-cades, looking for associations in other wavebands might give us interesting hints on neutrino sources. Additionally, blazar flares are suggested as promising transients for neutrino production Murase et al. 2018), and studies based on blazars' flaring properties might bring about new insight. Here we propose a series of analyses to search for neutrino emissions from flaring sources with multifrequency data. Our works consist of two parts. First, we study the multi-frequency sources inside the containment regions of IceCube alerts, investigating the correlation between flaring phases in various wavebands and the arrival time of the alerts. The second part of our analyses will focus on blazars, analyzing their light curves in two different bands: low frequency (infrared) and high frequency (γ-ray). We aim to study the correlation with neutrinos among blazars with different types of multi-frequency activities, taking into account the flaring stages from promising neutrino blazar TXS 0506+056. No previous studies have looked into the neutrino emission from blazars considering their multi-frequency light curves and accounting for correlations of multi-frequency flaring phases. In section 2, we introduce the multi-frequency behaviors of TXS 0506+056, which will be investigated throughout this paper. The neutrino data samples, the multi-frequency catalogs, and the blazar samples used in this work are described in section 3, and the selection of our source lists is shown in section 4. In section 5 and 6, we study the correlations of neutrinos with the electromagnetic flares of multi-frequency sources and blazars with different multi-frequency flaring stages. We discuss and summarize our results in section 7 and 8. MULTI-FREQUENCY FLARES OF TXS 0506+056 In Sep. 2017, the flaring state of a bright γray blazar, TXS 0506+056, was found in spatial and temporal coincidence with a 290 TeV neutrino alert, IceCube-170922A, at 3σ significance (IceCube Collaboration et al. 2018a). This neutrino event was accompanied by strong flares of TXS 0506+056 across the electromagnetic spectrum. A time-dependent search for archival neutrino flares with 9.5 years of IceCube data found a 3.5σ excess of ∼ 13.5 neutrinos in 2014-2015 from the same direction (IceCube Collaboration et al. 2018b). The multi-messenger association of 2017 alerts and 2014-2015 flares with TXS 0506+056 revealed this blazar as the first likely extragalactic high-energy neutrino source and triggered a considerable interest in the nature of counterparts of astrophysical neutrinos. The case of TXS 0506+056 demonstrates that the observed coincident activities of neutrinos and photons would greatly increase the probability of identifying the counterparts of IceCube events (IceCube Collaboration et al. 2018a). The highly variable characteristics of blazars, with flux that could increase at least a factor of two in a day, makes them play an indispensable role in finding astrophysical counterparts of high energy neutrinos. Murase et al. 
(2018) argue that neutrinos from blazars can be dominated by the flares in the standard leptonic scenario for their gamma-ray emission. Oikonomou et al. (2019) and Stathopoulos et al. (2021) estimated the neutrino emissions associated to γray and X-ray flaring periods of 12 Fermi bright blazars (for which simultaneous observations exist) and another 66 blazars (observed more than 50 times with the Swift X-ray Telescope -XRT, , respectively. Those works predicted the highest rates of muon neutrinos to be ∼ 1.2 − 3.0 yr −1 , concerning the X-ray flares of Mrk 421 and the γ-ray flares of AO 0235+164 and OJ 287. Considering the multi-frequency activity of TXS 0506+056 around the arrival time of IceCube-170922A, the strongest flaring period in the low-energy band (radio and infrared) and in the high-energy band (γ-rays) are not simultaneous. There is a significant lag of ∼ 300 days between the γ-ray and radio/infrared flares. Figure 1 shows the non-simultaneous infrared and γ-ray light curves of TXS 0506+056 with the time lag between flares are estimated with the Bayesian Block Algorithm (Scargle et al. 2013). The infrared data correspond to the WISE (Wide-field Infrared Survey Explorer, Wright et al. (2010)) mission, and the γ-ray data are obtained from Fermi-LAT observation (following the analysis described in section 4.2.5). In addition, PKS 1502+106 and J0242+1101 also show long-lasting 15GHz radio flares in coincidence with IceCube-190730A (Franckowiak et al. 2020) and Antares flares (Illuminati et al. 2021), respectively. Both blazars show a significant lag of radio to γ-ray flaring stages. In general, lags in different energy bands might arise from changes in the source's environment, causing different emission zones to shine in different energies. Taking the most plausible neutrino blazar, TXS 0506+056, for example, many studies have shown difficulties of reconciling both neutrino (IceCube170922A and 2014-2015 flares) and multi-frequency activities through a single emission model Reimer et al. 2019;Rodrigues et al. 2019;Petropoulou et al. 2020). A singlezone lepto-hadronic model might be able to describe the contribution from the high-energy alert (Ansoldi et al. 2018;Keivani et al. 2018;Cerruti et al. 2019;Gao et al. 2019;Righi et al. 2019a), but the modeling requires a subdominant hadronic component and assumes the presence of sufficient photons with right energy from the external field. Moreover, no single-zone scenario can explain the high neutrino flux from 2014-2015 flares and -at the same time-satisfy the constraints from the simultaneous spectral energy distributions (SED). The neutrino flares, surprisingly, were not accompanied by any photon flare (given all the observed information, Ice-Cube Collaboration et al. (2018b)), indicating that not all the neutrino emission is necessarily correlated to electromagnetic activity, especially in γ-ray (Halzen et al. 2019;Kun et al. 2021). Inspired by the time lag and the remarkable characters of TXS 0506+056, we try to recognize and compare the high stage in both low-frequency (infrared/radio) and high-frequency (γ-ray) light curves for selected blazar samples, and identify sources with multi-frequency activity similar to that of TXS 0506+056. By investigating and selecting blazars like TXS 0506+056 according to their multi-frequency flaring stages, we might be able to identify promising neutrino counterparts effectively. 
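The time lag quoted above is read off the Bayesian Block segmentations of the two light curves. A minimal sketch of one way to do this bookkeeping, assuming the lag is measured between the mid-times of the brightest block in each band, is given below; the array names (t_ir, f_ir, etc.) are placeholders, and the exact convention adopted for Fig. 1 may differ.

import numpy as np
from astropy.stats import bayesian_blocks

def peak_block_time(t, flux, err, p0=0.01):
    """Mid-time of the Bayesian Blocks segment with the highest mean flux."""
    edges = bayesian_blocks(t, flux, err, fitness='measures', p0=p0)
    means = [flux[(t >= lo) & (t <= hi)].mean() for lo, hi in zip(edges[:-1], edges[1:])]
    i = int(np.argmax(means))
    return 0.5 * (edges[i] + edges[i + 1])

# lag (in days) of the brightest infrared block with respect to the brightest gamma-ray block;
# t_ir, f_ir, e_ir and t_g, f_g, e_g are assumed numpy arrays of epochs (MJD), fluxes and errors
# lag = peak_block_time(t_ir, f_ir, e_ir) - peak_block_time(t_g, f_g, e_g)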
A number of analyses are performed throughout the paper, and there are combinations of source samples that are built with multiple selection criteria. In Table 3, we summarize all the analyses and source lists as well as their corresponding selection and motivation. 3. MULTI-MESSENGER SAMPLES AND DATA IceCube data samples IceCube has been announcing high-energy neutrino alerts since the spring of 2016, bringing about a total of 67 real-time alerts up to the end of May 2021, with a relatively high probabilities of those events being of astrophysical origin. Those alerts passed the selection criteria of the real-time alert system (Aartsen et al. 2017b) and generally have energy ≥ 100 TeV. Another 35 archival events from 2010 to 2016 fulfill the same criteria before the operation of the real-time alert system, summing up a total of 102 events considered in our analyses. The lists of real-time and archival alerts/events are taken from the Gamma-ray Coordinates Network (GCN) / Astrophysical Multimessenger Observatory Network (AMON) Notices and IceCube website 9 . Apart from high-energy neutrino alerts and archival events which would have qualified as real-time alerts, IceCube published a sample of track-like neutrino events collecting 10 years data (from 2008to 2018, IceCube Collaboration et al. 2021) that was assembled for neutrino point-source searches and used in the IceCube's 10-year time-integrated point-source analysis (Aartsen et al. 2020). This sample covers a broader energy range than the high-energy alert list, containing events with E < 100 TeV. To study only events with a higher probability of being of astrophysical origin and that are well reconstructed, we required events with reconstructed energy 60 TeV (which is the same cut applied in Padovani et al. 2016) and with angular uncertainty ≤ 5 degrees. The cuts result in 250,821 well-reconstructed high-energy track events studied in this paper. Blazar samples Three blazar samples are used in our studies. The 3HSP: The 3HSP catalog is a multi-frequency selected sample of 2013 HSP and HSP candidates (Chang et al. 2019). The 3HSP catalog is currently the most extensive and complete HSP catalog and an ideal sample to study the statistical properties (such as completeness, evolution, etc.) of blazars. In this study, we further consider 78 extra HSPs which should be included in the 3HSP catalog in the future. The updated version of the 3HSP catalog, which contains 2081 sources, is currently available through the Virtual Observatory 10 . A second and more complete version of the 3HSP catalog will be published soon. The 5BZCAT: We also selected blazars from the 5BZCAT catalog (Massaro et al. 2015), which consists of 3561 robust blazars, all confirmed via optical spec-troscopy. Even though the 5BZCAT is a compilation of blazars found by many different methods and thus not a complete sample, it is the largest catalog of confirmed blazars with optical spectral observations. The WIBRaLS: The last samples we use are the WISE Blazar-like radio-loud sources, named WIBRaLS and WIBRaLS2 catalogs 11 (D'Abrusco et al. 2014(D'Abrusco et al. , 2019. Those catalogs contain a total of ∼ 12415 blazar candidates and are the largest samples of their kind to date. The WIBRaLS catalogs are samples of infrared selected radio-loud blazar candidates with WISE (Wright et al. 2019) mid-infrared colors similar to that of confirmed γ-ray blazars. 
(In this paper, we use WIBRaLS to represent both the WIBRaLS and WIBRaLS2 catalogs.) In addition, there are 18 Fermi 4LAC-associated blazars within the containment regions of the IceCube alerts that are not cataloged in 3HSP, 5BZCAT, or WIBRaLS. Among those 18 sources, 11 are related to alerts which have relatively large angular uncertainty and were not considered in our analyses. In total, we have collected a meta-blazar sample of 15424 blazars and blazar candidates from the three extensive blazar catalogs and 4LAC. Multi-frequency data Here we describe the multi-frequency data used in this paper, from millimeter radio up to γ-ray. Millimeter radio: The millimeter multi-epoch data were obtained from the ALMA Calibration Catalog (ACC; Bonato et al. 2019), an astronomical-measurement database of calibration sources that are mostly bright blazars observed in seven different bands (ranging from 84 GHz to 950 GHz). We used the band 3 data (84-116 GHz) to describe most of our sources. For 16 sources with Fermi counterparts in our meta-blazar sample, we consider millimeter data other than band 3, with preference given to band 4 (125-163 GHz), band 6 (211-275 GHz), and band 7 (275-373 GHz). The ACC light curves have sampling intervals varying from days to years between May 2011 and July 2018. For those observations with time separations smaller than 15 days, we combined the bins and took their average value. Infrared: This paper uses 4.6 μm WISE W2 infrared light curves from the AllWISE Multiepoch Photometry dataset (AllWISEMEP, WISE Team 2020a) and the NEOWISE (Near-Earth Object WISE, Mainzer et al. 2014; WISE Team 2020b) data release, with observing times ranging from January 2010 to December 2020. The WISE light curves have a large interval between epochs (of several hundred days), with the observations usually concentrated in 1-2 days separated by several months. Thus, we combined the infrared data with time separations smaller than 15 days, averaging the signal and removing outliers with flux values lying outside 3-4σ of the mean. Detections with signal-to-noise ratio ≤ 2 are also removed. After the combination, the mean interval is roughly 180 days. Moreover, there is a break between MJD 55600 and 56500, resulting from the gap between the AllWISEMEP and NEOWISE surveys, which does affect the identification of relevant flaring periods. We manually added artificial points 180 days after the beginning and before the end of the break to remedy the data gap. The flux and error of the artificial points are based on the average infrared flux of each source. X-ray: We consider the 3 keV multi-epoch observation data from Swift XRT, covering December 2005 to October 2020. The data are based on blazars frequently observed by Swift and were made available in Giommi et al. (2019). As for the data pre-processing in the millimeter and infrared bands, X-ray observations with time separations smaller than 15 days were combined and averaged. γ-ray: The γ-ray data are retrieved from the aperture photometric light curves of the Fermi-LAT 4FGL-DR2 catalog (Ballet et al. 2020). Those aperture light curves are binned evenly in 30-day intervals since June 2008. Using the 10-year-average photon indices in the 4FGL-DR2 catalog, we converted the photon fluxes of the aperture light curves from 0.1-200 GeV to 0.8-200 GeV energy fluxes in units of erg cm⁻² s⁻¹, focusing on the higher energy band to avoid contamination from nearby sources due to the large point spread function at lower energies.
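The conversion from photon flux to energy flux mentioned above follows directly from the assumed single power law dN/dE ∝ E^-Γ. A minimal Python sketch of this step is given below; the function name and default band edges are chosen here for illustration.

import numpy as np

GEV_TO_ERG = 1.602176634e-3

def photon_to_energy_flux(photon_flux, gamma,
                          e_lo_ph=0.1, e_hi_ph=200.0,
                          e_lo_en=0.8, e_hi_en=200.0):
    """Convert an integral photon flux [ph cm^-2 s^-1] over [e_lo_ph, e_hi_ph] GeV
    into an energy flux [erg cm^-2 s^-1] over [e_lo_en, e_hi_en] GeV,
    assuming a single power law dN/dE = K * E**-gamma."""
    # normalisation K from the measured photon flux
    if np.isclose(gamma, 1.0):
        k = photon_flux / np.log(e_hi_ph / e_lo_ph)
    else:
        k = photon_flux * (1.0 - gamma) / (e_hi_ph**(1 - gamma) - e_lo_ph**(1 - gamma))
    # integral of E * dN/dE over the target band (in GeV cm^-2 s^-1)
    if np.isclose(gamma, 2.0):
        e_flux_gev = k * np.log(e_hi_en / e_lo_en)
    else:
        e_flux_gev = k * (e_hi_en**(2 - gamma) - e_lo_en**(2 - gamma)) / (2.0 - gamma)
    return e_flux_gev * GEV_TO_ERG

# example: 1e-8 ph cm^-2 s^-1 with a 4FGL photon index of 2.1
print(photon_to_energy_flux(1e-8, 2.1))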
Multi-frequency sources in IceCube alert regions The first samples we will study in this paper are multifrequency sources in the vicinity of IceCube alert regions. Starting from the four multi-frequency catalogs described in section 3.3, we selected the sources located in δ < |40 • | and b > |10 • |, with similar location distribution concerning most IceCube alerts and to avoid the complicated Galactic plane region. Given that we want to study the correlations between the sources' flaring activity and IceCube alerts, selecting sources with a higher probability of being variable is more effective. That is, for infrared sources, we took only those with a counterpart in the infrared-selected WIBRaLS catalog and those with 1.4 GHz flux ≥ 100 mJy. By applying these cuts, we obtained only bright and blazar-like infrared sources, as blazars are the most variable sources in the infrared sky. For γ-ray sources, only those with variability index > 18.48 are selected. The Fermi-LAT team applies the variability index obtained with the likelihood ratio test to evaluate the variability of a γ-ray source, considering the fluxes in several time intervals. In the 4FGL-DR2 catalog, a value greater than 18.48 indicates < 1% chance of being a steady source. We note that the X-ray and millimeter catalogs we used are mainly for blazars, and the selected soures are already highly variable. Further criteria are applied to the uneven-binned Xray and millimeter samples to remove the ones with insufficient data. Specifically, only millimeter and Xray sources with more than five detections at different epochs are considered. All the above criteria combined lead to a selection of 445 ACC, 3179 WIBRaLS, 876 XRT, and 992 4FGL-DR2 sources , with 21, 214, 44, and 93 of them in the containment region of IceCube alerts, respectively. Here we applied a factor of 1.1, 1.3, and 1.5 to the 90% containment region of the alerts, taking into account that 10% of the candidates expected to be outside of the 90% containment area and the possible systematic errors ). The optimal factor was obtained for each multi-frequency catalog via a simulation, which is described in section 5.1.1. Blazars with non-simultaneous multi-frequency flares The second part of our source lists is made of blazars with multi-frequency activity similar to that of TXS 0506+056 (see section 2 and Figure 1). This section describes our selection steps to obtain a sample of such blazars. Our steps are summarized as follows: We begin by cutting our meta-blazar sample considering two different wavebands of light-curve data. Given the completeness and accessibility of multi-wavelength light curves, here we focus on objects with both Fermi 4FGL-DR2 and WISE multi-epoch data, to study the flaring behavior in γ-rays and infrared. Among the meta-blazar sample obtained in section 3.2, only 2700 sources have both γ-ray and infrared light curves available. We call these 2700 blazars the "Fermi-Infrared Blazar Sample (FIBS)" with 1011, 1622, and 1547 sources in the 3HSP, 5BZCAT, and WIBRaLs catalogs, respectively. There are 1335 FIBS sources cataloged in at least two blazar catalogs, and 152 of them are in all three catalogs. The IceCube Collaboration has been searching for neutrino excess from several lists of objects (Abbasi et al. 2011;Aartsen et al. 2013Aartsen et al. , 2014Aartsen et al. , 2017cAartsen et al. , 2019Aartsen et al. , 2020. 
Among them, there are eight blazars in FIBS for which the correlation with astrophysical neutrinos has reached a significance of the order of p-value ≤ 0.05 in at least one of the IceCube all-sky point-source searches. Moreover, 90 out of the 2700 FIBS sources are located within the 90% containment regions of the 102 IceCube high-energy alerts (section 3.1). The blazar TXS 0506+056 is known for having a relatively high significance in the IceCube point-source analyses and for its direct association with the track event IceCube-170922A and the neutrino excess in 2014-2015 (IceCube Collaboration et al. 2018a,b; Padovani et al. 2018). The 97 sources that are either in the alert containment regions or associated with a weak neutrino excess signal (at the 95% significance level) are called potential neutrino sources throughout the paper. In the search for potential neutrino blazars, the correlation between radio and γ-ray flares is also of interest. In particular, the promising neutrino blazar TXS 0506+056 shows a significant lag between the radio and γ-ray flaring phases. However, there are currently no other public, long-term monitored radio data available. The data from the millimeter ACC catalog are the best option for us, even though they do not cover the full time range of the Fermi light curves and the observations are sometimes triggered by high states in other wavebands. In our meta-blazar sample, there are 504 sources with both Fermi and ACC multi-epoch data available. Multi-frequency flaring stages Our next step is to identify flares from the light curves of the FIBS and of the Fermi-ACC sub-sample. A flaring period can be objectively identified through the Bayesian Blocks algorithm, which aims to find the optimal segmentation of the data over the observation interval (Scargle et al. 2013). In this work, we use the astropy implementation of Bayesian Blocks to detect statistically significant variations in multi-frequency light curves with more than two observations at different time epochs. We chose the prior that makes the algorithm sensitive to variations that are significant at the 99% confidence level (a false-alarm probability of 0.01), identifying only strong and clear flaring episodes and avoiding tentative flares. If the algorithm recognizes no flare for a given light curve, we lower the confidence level to 95% to search for potential weaker flaring periods of the source. Our purpose is to select as many TXS 0506+056-like sources as we can. In Abdo et al. (2010b), the "bright state" for observations in a light curve is defined as F_i − σ_i > F̄ + 1.5 × S, where F_i and σ_i represent the flux (density) and error of each detection in the light curve, while F̄ and S are the mean and standard deviation of the light curve. Following and slightly adapting the above criterion, we keep those blocks with F_blocks > F_quiescent + 1.3 × S as being in a flaring state. F_blocks is the average flux (density) in a block, and F_quiescent is the quiescent flux (density), defined as the mean flux of the faintest ∼30% of the points detected in the quiescent phase. We excluded the detections with the lowest flux values when estimating the quiescent flux, as they might be low-statistics outliers. The quiescent flux is used in this paper because the mean flux F̄ can sometimes be twice as bright as the quiescent state, especially for a highly variable source flaring during most of the observing period. In addition, considering that we are comparing the average flux over multiple detections in a block, flaring blocks are selected with a factor of 1.3 × S instead of the 1.5 used in Abdo et al. (2010b). We note that for the infrared light curves F_quiescent is sometimes much lower than the mean value, due to the variability and quality of the WISE data, and in those cases we further require F_blocks > F_average + S for a block to be considered as flaring.
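A minimal Python sketch of this flare-identification step, combining the astropy Bayesian Blocks segmentation with the F_blocks > F_quiescent + 1.3 × S criterion, is given below; the helper name, the exclusion of only the single lowest point, and the block bookkeeping are simplifying assumptions of this illustration rather than the exact implementation used in the paper.

import numpy as np
from astropy.stats import bayesian_blocks

def flaring_blocks(t, flux, flux_err, p0=0.01, k=1.3, quiescent_frac=0.30):
    """Segment a light curve with Bayesian Blocks and flag blocks in a flaring state,
    following F_block > F_quiescent + k * S."""
    edges = bayesian_blocks(t, flux, flux_err, fitness='measures', p0=p0)
    s = np.std(flux)                                   # standard deviation of the whole light curve
    block_means = np.array([flux[(t >= lo) & (t <= hi)].mean()
                            for lo, hi in zip(edges[:-1], edges[1:])])
    # quiescent flux: mean of the faintest ~30% of points, skipping the very lowest value
    faint = np.sort(flux)[1:max(2, int(quiescent_frac * len(flux)))]
    f_quiescent = faint.mean()
    is_flaring = block_means > f_quiescent + k * s
    return edges, block_means, is_flaring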
Besides, considering that we are comparing the average flux of multiple detections in a block, the flaring blocks are selected with 1.3 × S instead of the factor of 1.5 used in Abdo et al. (2010b). We note that sometimes for the infrared light curves the F quiescent is much lower than the mean value, due to the variability and quality of the WISE data; in those cases we further require F blocks > F average + S for a block to be considered as flaring. Once the bright phases in both the high-energy and low-energy light curves of each blazar are identified, the "multi-frequency epoch" of a blazar can be divided into three flaring stages: high-energy flaring, low-energy flaring, and simultaneous flaring stages. Sources with no flare identified in both light curves are not considered. The high-energy flaring stage is defined as the time interval with brighter flux in the high-energy light curve but relatively low flux (quiet state) in the low-energy one, and the low-energy flaring stage is defined vice versa. The simultaneous flaring stage refers to time intervals with both the high-energy and low-energy flux in the high state. We define sources with no simultaneous flaring stage as "Non-simultaneous Flaring Sources" (NFS). On the contrary, sources with simultaneous flaring stages longer than half of the whole infrared and γ-ray flaring time are considered as "Correlated Flaring Sources" (CFS). An example of the infrared and γ-ray light curves for a source in the CFS control group is illustrated in Figure 2.
Non-variable sources and uncertain sources
We removed cases with no significant variability or cases where the light curves resemble small fluctuations close to background levels. Potential neutrino sources are not dropped at this stage. Non-variable sources: For sources with Fermi variability index ≤ 18, which are treated as non-variable, we exclude the ones with Normalized Excess Variance 14 σ NXS < 0.0001 (computed from the flux F i and its associated error σ i in each time bin and the flux standard deviation S over the entire light curve) or with F max blocks /F min blocks < 1.7. The ratio 1.7 is roughly the median of the Fermi non-variable sources. Uncertain sources: If the ratio of the standard deviation between the non-flaring phases and the whole observing time of a source is less than 0.9, it is regarded as an ambiguous source with an extremely fluctuating light curve. The standard deviation ratio value of 0.9 is approximately the third quartile of the whole FIBS sample. We removed a total of 886 FIBS sources in this step. In the end, 1353 FIBS sources remain to be selected in the next step.
Selection of non-simultaneous flaring sources
Having removed the non-variable and uncertain sources, we now select those blazars with low-energy flares that are not simultaneous with high-energy flares, like TXS 0506+056 (Figure 1). When there is no simultaneous flaring stage between the low- and high-frequency flares, we directly selected those with long radio/infrared flares that happen after rapid high-energy Fermi flares, with a low-energy flaring lag smaller than 500 days. If the simultaneous flaring stages in a source only make up a small fraction (usually no more than one-fourth of the successive flaring period in the two light curves), the source is still considered a good candidate and is picked when relatively long low-frequency flares follow short high-frequency flares.
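A compact sketch of the block classification and variability cuts described above follows. The quiescent-flux estimate is simplified (mean of the faintest ~30% of detections, without the additional outlier rejection mentioned in the text), and the normalized excess variance is written in its standard form, which we assume matches the definition used by the authors; all function names are ours.

```python
import numpy as np

def quiescent_flux(flux, frac=0.3):
    """Mean of the faintest ~30% of detections (simplified proxy for the
    quiescent flux; the paper additionally rejects the very lowest outliers)."""
    faintest = np.sort(flux)[: max(1, int(frac * len(flux)))]
    return faintest.mean()

def flaring_blocks(flux, block_id, factor=1.3):
    """Return the Bayesian-block labels whose mean flux exceeds
    F_quiescent + factor * S, following the criterion in the text."""
    f_q, s = quiescent_flux(flux), flux.std()
    return [b for b in np.unique(block_id)
            if flux[block_id == b].mean() > f_q + factor * s]

def normalized_excess_variance(flux, flux_err):
    """sigma_NXS = (S^2 - <sigma_err^2>) / <F>^2 (standard definition,
    assumed here to match the paper's footnote)."""
    return (flux.var() - np.mean(np.square(flux_err))) / np.mean(flux) ** 2
```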
On the contrary, we remove those cases where the majority of the multi-frequency flaring periods are simultaneous and complex (i.e., when there are multiple γ-ray flares contained inside a single infrared flare), because these multi-frequency behaviors are not the ones we see for TXS 0506+056. The above criteria led to a selection of 171 and 21 TXS-like sources from the FIBS sources and the ACC-Fermi sub-sample, respectively, with a significant lag of the infrared and millimeter flares with respect to the γ-ray flares. We note that these two lists are not the final samples, as they are selected based on the Fermi aperture light curves. Further refinement of these two preliminary source lists will be discussed in the next sections.
Likelihood γ-ray analysis
In previous sections, we have obtained two preliminary lists of sources with multi-frequency flaring activity like TXS 0506+056 using Fermi aperture photometric light curves. However, aperture photometry is not suitable for detailed scientific analysis. The aperture light curves were only used to screen the large number of FIBS sources in the first place and reduce the extensive computational task of producing thousands of γ-ray light curves via likelihood analysis. Additional confirmation and filtering of the pre-selected sources are required to make sure our selection is robust. We hence performed a binned likelihood light-curve analysis with the Fermi Science Tools 15 (https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/) for all pre-selected sources obtained in the last section. Taking the same time interval as the aperture γ-ray light curves, we applied a 30-day binning in the Fermi likelihood analysis from MJD 54682 to 59257. The energy range covers from 900 MeV to 500 GeV, focusing on a slightly higher energy range compared to the aperture light curves. A higher energy band was chosen mainly to reduce the computational load and avoid contamination at lower energy. We scanned three different pivot energies (1.0, 2.5, and 5.0 GeV) for every pre-selected source and time bin and kept the pivot energy that leads to the lowest uncertainty on the normalization (N 0 ) and photon index (Γ). For time bins where the test statistics (TS) are very similar for all pivot energies, we kept the ones with a higher signal-to-noise ratio. For some particular cases where the likelihood analysis did not converge, an extra run with a pivot energy of 2 GeV was tried. Detections with TS ≤ 5, flux error ≥ flux, or photon index (Γ) > 5.5 are considered as upper limits, and the limiting energy fluxes are set to 3 × 10 −14 erg s −1 cm −2 . This value was applied to represent an average upper-limit level for our sources, since most of the detections deemed as limits have energy fluxes smaller than 3 × 10 −14 erg s −1 cm −2 . We reevaluated the flaring phases of our pre-selected TXS-like sources from the FIBS and ACC-Fermi sub-samples with these new light curves one by one. This procedure results in 32 (11) sources remaining in the infrared- (millimeter-) γ-ray sub-sample. To avoid missing possible neutrino emitters, we adopted a slightly different criterion to help identify sources with relatively weak gamma-ray flares. We reran the Bayesian Blocks Algorithm assuming the gamma-ray flux upper limits to be 10 −14 erg s −1 cm −2 . In this way, the source 3HSPJ064850.5-694522 was identified at the border of our main selection criteria and therefore added to our TXS-like sample.
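The post-fit screening of the likelihood light curves reduces to a simple flagging rule. The sketch below applies the thresholds quoted above (TS ≤ 5, flux error ≥ flux, Γ > 5.5, and the fixed 3 × 10^-14 erg s^-1 cm^-2 upper-limit level); the dictionary layout of a per-bin fit result is an assumption made for illustration, not the Fermi Science Tools output format.

```python
UL_FLUX = 3e-14  # erg s^-1 cm^-2, representative upper-limit level from the text

def flag_upper_limits(fit_results):
    """fit_results: list of dicts with keys 'ts', 'flux', 'flux_err', 'index',
    one per 30-day bin. Detections failing the quality cuts are replaced by
    the fixed upper-limit level and marked as limits."""
    cleaned = []
    for r in fit_results:
        is_limit = (r['ts'] <= 5) or (r['flux_err'] >= r['flux']) or (r['index'] > 5.5)
        cleaned.append({'flux': UL_FLUX if is_limit else r['flux'],
                        'upper_limit': is_limit})
    return cleaned
```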
The large number of sources removed might be a consequence of the large flux fluctuations associated with the Fermi aperture light curves (especially for faint sources, where low photometric fluxes are usually associated with non-detections in the likelihood analysis, ending up flagged as flux upper limits). Contamination by photons from nearby sources in the aperture light curves at low energy and the gap between MJD 55600 and 56500 in the WISE multi-epoch data could additionally cause spurious selections. We note that some of the γ-ray flares from the dedicated light-curve analysis are sometimes associated with sources that produced only 1-2 detection(s) in the entire light curve. However, given that we have scanned the pivot energy over several values, carefully removed the problematic detections, and objectively and systematically identified the flaring phases with the Bayesian Blocks Algorithm, the detected variations/flares can be considered robust.
Lists of selected blazars and control groups
Finally, we have a total of 32 and 11 sources with TXS-like multi-frequency activity selected from the FIBS and from the ACC-Fermi sub-sample, respectively. To check whether the selected sources are more probable neutrino emitters than the remaining ones, we took the NFS and CFS samples (section 4.2.2) selected from FIBS as control groups. We note that some sources without simultaneous flares in two different energy bands have already been selected as TXS-like sources, and they are not included in the control groups. Sources with extremely ambiguous flares are removed from the control groups as well. Our control groups, in the end, contain 409 NFS and 62 CFS among the FIBS sources. The four groups of sources do not overlap. Given that the ACC multi-epoch data are much sparser than the WISE and Fermi light curves, it is meaningless to search for highly overlapping millimeter and γ-ray flares, as well as orphan flares. Therefore, there are no control groups for the millimeter-selected sample. We note that the selection based on the ACC data suffers from heavy incompleteness, and the 11 sources selected here are only meant to compare the results from the millimeter band with those from the infrared. The final numbers of sources in each selected list and control group are shown in Table 2, and the full list and all the light curves (as well as the Bayesian Blocks results) of the selected TXS-like sources from the FIBS are illustrated in Appendices A and B.
TIME CORRELATIONS BETWEEN MULTI-FREQUENCY FLARES AND THE NEUTRINO ALERTS
Our first analysis is to explore the correlations between the flaring periods of multi-frequency sources and the arrival times of the neutrino alerts.
Time-dependent analysis
To study if the bright state epochs are consistent with the arrival times of the IceCube alerts, we follow the methods in Plavin et al. (2020) and perform a time-dependent analysis for multi-frequency sources (ACC, WIBRaLS, XRT, and 4FGL-DR2, selected in section 4.1) inside the IceCube alert regions. Given that most IceCube alerts are close to the celestial equator, and to focus on extragalactic objects, here we consider only 80 IceCube alerts with |b| > 10° and |δ| < 40°. In the time-dependent analysis, an observable R(t = 0) was defined as the ratio of the average flux (density) within the neutrino time window (∆T ) to that outside this time window, R(t) = ⟨F inside ∆T ⟩ / ⟨F outside ∆T ⟩, where t = 0 means that the observing times of the multi-frequency data were not shifted manually.
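Computing R(t) for a single source then amounts to comparing the mean flare level inside and outside the alert window. A minimal sketch is given below, assuming a symmetric window of width ∆T centered on the alert time (the choice of ∆T is discussed in the next paragraphs) and a light curve already converted to flare levels; the function name is ours.

```python
import numpy as np

def r_statistic(t_obs, flare_level, t_alert, delta_t, shift=0.0):
    """Ratio of the mean flare level inside [t_alert - dT/2, t_alert + dT/2]
    to the mean flare level outside it, with observing times shifted by
    `shift` (shift = 0 corresponds to R(t = 0))."""
    t = np.asarray(t_obs) + shift
    level = np.asarray(flare_level)
    inside = np.abs(t - t_alert) <= delta_t / 2.0
    if inside.sum() == 0 or (~inside).sum() == 0:
        return np.nan  # window empty or covering everything: undefined
    return level[inside].mean() / level[~inside].mean()
```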
∆T is the time window around the detection time of the neutrino alert that contains the source being considered. Higher R(t) values imply that flares tend to occur within the window ∆T around the arrival of the IceCube alerts. We normalized the multi-frequency fluxes by the quiescent flux of each source (the average flux of the faintest 30% of detections for a source, see section 4.2.2) to avoid biases between different sources. The normalized fluxes are defined as "flare levels", i.e., the ratio between the flux of each detection and the quiescent flux.
Defining the time window ∆T values
For the 4FGL and WIBRaLS sources, ∆T is determined via Monte Carlo simulation as the value with the smallest p-value among several arbitrary trial values. For each ∆T trial value, we scrambled the R.A. of the IceCube alerts 10,000 times using R(t = 0) as the reference test statistic, and the p-value is the probability that the shifted alert positions yield a higher test statistic. As described in section 4.1, we enlarged the 90% containment region of the alerts with several trial factors, 1.1, 1.3, and 1.5, to account for the systematic error of the alerts in the simulation. In case the enlargement factor improves the significance of the correlations, it is further used for the time-dependent analysis. According to the simulation results in Figure 3, we selected ∆T = 570 days for WIBRaLS sources and ∆T = 22 days for Fermi 4FGL-DR2 sources. The most significant signal was obtained when extending the containment region with a factor of 1.3 for the WIBRaLS sample and a factor of 1.5 for the 4FGL sample. We note that the minimum and second minimum values in the curve of factor 1.5 for the 4FGL sources are very close to each other. This minimum is also close to the one in the curve with factor 1.1, which seems to converge more reasonably. Therefore, we also perform the analysis considering an extension of factor 1.1. For the ACC and XRT sources, we used the median time interval of roughly 30 days and did not enlarge the 90% containment regions. The simulations do not converge well for them since their light curves are not equally binned, and those factors for the containment region did not lead to more significant results. Finally, we consider 21, 165, 44, and 93 ACC, WIBRaLS, XRT, and 4FGL-DR2 sources within the containment regions of IceCube alerts, though with different enlargement factors for the regions.
Sliding the observing time
To test the correlation between electromagnetic flaring times and neutrino alert times, the observing times of the multi-frequency sources are shifted with a time parameter t, while the neutrino arrival time and time window remain the same. The correlation would be demonstrated if the highest flux ratio R(t) is centered at the original observing time t = 0. The observing times were shifted between −3600 < t < 3600 days, roughly spanning 10 years before and after the real observation, with a shift step roughly equal to half of the time window ∆T . We note that the shift step for the infrared data is 360 days, not half of ∆T , given that the simulated time window for the WIBRaLS sources is too large. IceCube has observed neutrino data and published alerts for around 10 years, including "alert-like" archival neutrino events before the real-time alert system was established. By choosing to shift the light curves from 10 years before the actual observing date to 10 years after that, the latest multi-frequency observation would be shifted through the time window of the first IceCube alert.
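The ∆T optimization and the later significance estimates both rest on the same right-ascension scrambling test. A schematic version is sketched below; the `test_statistic` callable is assumed to wrap the source-alert matching and the stacked R(t = 0) computation outlined above, and is not a function of any public package.

```python
import numpy as np

rng = np.random.default_rng(0)

def scrambling_pvalue(alerts_ra, test_statistic, n_trials=10_000):
    """Chance probability that alerts with scrambled right ascensions
    give a test statistic at least as large as the observed one."""
    observed = test_statistic(np.asarray(alerts_ra))
    n_hit = 0
    for _ in range(n_trials):
        fake_ra = rng.uniform(0.0, 360.0, size=len(alerts_ra))
        if test_statistic(fake_ra) >= observed:
            n_hit += 1
    return n_hit / n_trials
```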
The maximum R(t) value might be at the center (t = 0, without shifting the observing time) by chance. We estimated the chance probability of flares coinciding with the alert arrival time using control samples via Monte Carlo simulation. The control samples were built by randomly selecting the same number of sources as the experimental sample from those outside the alert regions and assigning an "alert time" to each randomly selected source. We repeated all the steps above with the control samples and obtained randomized R(t) values, simulating the ratio of the average flux (density) inside and outside the time window ∆T with random sources and alert times 10,000 times. The randomized R(t) values are used to estimate the significance of the correlation between electromagnetic flares and IceCube alerts.
Correlations with IceCube alerts
The temporal correlations between neutrino alerts and multi-frequency flares are shown in Figures 4 to 7. The X-axes are the delays of the multi-frequency detections, and the Y-axes represent the clustering of flares around the neutrino time window ∆T . To investigate how the correlation varies with the strength of the flares, we tested the relationships for several flare-level cuts. We selected the flare-level values of 1.5 and 2.5 as our flare criteria, simply because they are roughly the mean and the 1σ upper limit of the multi-frequency light curves' variability. The R(t) does not center at any time lag value in Figure 4. We do not find any evidence suggesting a correlation between the millimeter flares and neutrino alerts, but it should be noted that the millimeter light curves are not equally binned, since some of the detections are taken from target-of-opportunity observations. According to Figure 5, the maximum R(t) value (apart from the peak at around 3000 days) is at the time lag t = 0 for strong flares with flare level ≥ 1.5 or ≥ 2.5, implying that the strong infrared flares might be correlated with the neutrino arrival time. However, the correlation is not significant as the highest R(t = 0) is just above the 1σ statistical trial error. We then removed the two promising neutrino blazars, TXS 0506+056 and PKS 1502+106, to evaluate if the possible association is driven by these two blazars. Without the two neutrino blazars, we can still identify the R(t = 0) peak in the figure, suggesting that the possible trend is not driven by them. Other blazars might also play a role in the association between infrared flares and neutrino alerts. We note that the R(t) peak at t ∼ 3000 days is too far away from the neutrino arrival time and thus we do not consider it to be associated with the neutrino events. This peak is caused by the flares from only one source. There are R(t) peaks around 2000 and 3000 days before the neutrino arrival time in Figure 6. However, no peak stands out around the central region, which would be more interesting when considering the association with the neutrino alerts. Our results suggest that the correlation between X-ray flares and neutrino events is not obvious, and the results are probably affected by the poor time coverage and uneven time intervals of the Swift XRT data. A further investigation with dedicated X-ray time coverage might be fruitful in the future. The γ-ray analysis (Figure 7) suggests that the γ-ray flaring periods are not apparently correlated with the arrival time of the neutrino alerts.
In the upper panel, several sources with extremely bright flares dominate the results and cause the peaks delayed by several hundred days and roughly 1500 days ahead of the neutrino alerts. We have no evidence that the peak at t = −1500 days is related to neutrino emission, while the possible correlation at t ∼ 300 days might be more plausibly connected to a hadronic process. However, no R(t) peak is found after removing the few brightest sources in the figure. The peaks also change when we consider more sources with a larger area of the neutrino containment region, and there are several weak peaks in the lower panel. This indicates that a systematic association between γ-ray flares and neutrinos might not exist, and that the hadronic emission in γ-rays is complicated and might depend on the source. Among our time-dependent analyses for the different multi-frequency samples, only the infrared one shows some interesting sign of a possible correlation with neutrinos. We estimated the significance level of the correlations by calculating the probability that randomly selected sources and assigned alert times (from the control samples) could lead to higher R(t) values only by chance. The chance probability is illustrated as a pre-trial p-value in Figure 8. We did not apply trial factors to the pre-trial values as the correlation between infrared flares and IceCube alerts is far from significant. As shown in Figure 8, the smallest pre-trial p-value close to the neutrino arrival time (t = 0) is 0.1.
SPATIAL CORRELATIONS BETWEEN BLAZARS AND THE NEUTRINO-TRACK EVENTS
Another part of our analysis aims to study the correlations of neutrinos with different subgroups of blazars divided by their multi-frequency activity. Three tests are carried out in this section. The first two tests are performed for the infrared-γ-ray-cut FIBS sources to investigate the overall multi-frequency flaring correlations and simultaneity of the whole population of blazars. The purpose is to compare the multi-frequency properties of typical blazars and of those that might be associated with IceCube neutrinos, and to understand the potential biases in our analyses. In the last test, we applied our selected lists of TXS-like blazars (see section 4.2) to study the neutrino-blazar correlations with respect to the electromagnetic flaring stages.
Multi-frequency correlations of blazars
We study the average time lag of the infrared flares with respect to the γ-ray ones for all the FIBS sources, to check whether the time lag seen in TXS 0506+056 is a common phenomenon among blazars. The cross-correlation between the two multi-frequency light curves of a source can be estimated with the Discrete Correlation Function (DCF). The time lag for each source is retrieved from the time bin with the maximum cross-correlation value. Here, the time lag is also evaluated by fitting the DCF cross-correlation curve, and we take the Gaussian mean as a representation of the time lag between the two light curves. Figure 9 illustrates the distribution of the time lag of the infrared flaring with respect to the γ-ray activity for the FIBS sources. According to the figure, the time lag obtained directly from the time bin with the maximum cross-correlation value largely centers in the 0-30 day bin, while that obtained with the Gaussian fitting is more scattered but still gathers around zero. On average, there is no significant delay of the infrared with respect to the γ-ray flares for most blazars.
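The lag estimate in Figure 9 can be reproduced with a discrete correlation function followed by a Gaussian fit of the DCF peak. The sketch below follows that recipe in a simplified form (measurement errors are neglected in the DCF normalization); the binning and the fit initialization are illustrative choices of ours.

```python
import numpy as np
from scipy.optimize import curve_fit

def dcf(t1, f1, t2, f2, lag_bins):
    """Discrete correlation function of two unevenly sampled light curves,
    evaluated at the centers of `lag_bins` (positive lag: f2 trails f1)."""
    t1, f1, t2, f2 = map(np.asarray, (t1, f1, t2, f2))
    udcf = np.outer(f1 - f1.mean(), f2 - f2.mean()) / (f1.std() * f2.std())
    lags = t2[None, :] - t1[:, None]
    centers, values = [], []
    for lo, hi in zip(lag_bins[:-1], lag_bins[1:]):
        sel = (lags >= lo) & (lags < hi)
        if sel.any():
            centers.append(0.5 * (lo + hi))
            values.append(udcf[sel].mean())
    return np.array(centers), np.array(values)

def gaussian_lag(centers, values):
    """Fit a Gaussian to the DCF curve and return its mean as the lag."""
    gauss = lambda x, a, mu, sig: a * np.exp(-0.5 * ((x - mu) / sig) ** 2)
    p0 = [values.max(), centers[np.argmax(values)], (centers[-1] - centers[0]) / 4]
    popt, _ = curve_fit(gauss, centers, values, p0=p0)
    return popt[1]
```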
Blazars like TXS 0506+056, with an infrared time lag of ∼ 300 days, represent only a tiny fraction of the total FIBS sources.
Simultaneity of multi-frequency flares of blazars
In this section, we present a comparison between the multi-frequency flaring phases of blazars that are probably related to IceCube neutrinos and those of the entire blazar population. Blazars within the 90% IceCube alert containment regions or associated with a weak neutrino excess signal (p-value ≤ 0.05) in IceCube point-source analyses are considered as potential neutrino sources (section 4.2.1). We would like to test whether the high-energy and low-energy flares of those potential neutrino blazars are more likely to be simultaneous or "orphan". Figure 10 shows the distribution of the ratio of the simultaneous flaring stages to the whole flaring period in infrared and γ-ray (simultaneous flaring time ratio) for the FIBS (sub-)samples. To simplify, here we call those with ≥ 95% significance level in IceCube's point-source searches "IceCube Warm Spots." On the other hand, sources within the containment regions of IceCube alerts are defined as "IceCube Region" sources. Figure 10. Distribution of the simultaneous flaring period (overlap flaring time) ratio in infrared and γ-ray. The blue bars indicate the FIBS sources flaring in both infrared and γ-ray, and the orange and green bars are the "IceCube Warm Spots" and "IceCube Region" sources, respectively. See text for more details. The flaring times are defined with the Bayesian Blocks Algorithm, and details of how we defined the simultaneous flaring stages in a light curve are written in section 4.2.2. We note that here we assumed the 90% IceCube containment region to be a rectangle instead of an approximate ellipse to speed up the simulation. Besides, only sources with flares in both the WISE and Fermi light curves are plotted in the histogram. The numbers of FIBS sources and potential neutrino sources considered in this test are therefore slightly different: 2239 infrared- and γ-ray-flaring blazars among FIBS, 8 "IceCube Warm Spots", and 88 "IceCube Region" sources are used. From Figure 10, it is clear that the distribution of the simultaneous flaring time ratio for "IceCube Warm Spots" is substantially different from that of typical FIBS sources and "IceCube Region" sources, suggesting that those with a weak neutrino signal in IceCube's previous point-source analyses tend to have simultaneous flares in infrared and γ-ray. The significance levels are estimated using Monte Carlo simulation. For the "IceCube Region" sources, we randomly scrambled the R.A. of the IceCube alerts N = 10 5 times and counted the number of iterations in which the simultaneous flaring period is longer than the one obtained with the original R.A.; this number is denoted M. The p-value is then defined as (M + 1)/(N + 1). For the "IceCube Warm Spots", in each iteration we instead randomly selected the same number of insignificant sources as the ones with 95% significance level, rather than shifting the R.A. The simulation results are illustrated in Figure 11 and Table 1. In the simulation, we further divided the "IceCube Region" sources by their distance to the reconstructed center of the alerts, to study the influence of the distance to the alert center on the significance level. Among the 88 sources with both infrared and γ-ray flares in their light curves and within the IceCube 90% containment region, 14 are closer than one degree to the alerts' center. Figure 11.
The ratio of orphan flares versus simultaneous multi-frequency flares for the observed and simulated data. The stars represent the averaged values of the observed data, while the circles represent those from the simulations. The red, green, and blue points are "IceCube Warm Spots", "IceCube Region" sources with a distance smaller than 1°, and "IceCube Region" sources with a distance larger than 1°, respectively.
Table 1. Pre-trial p-values for three sub-groups of potential neutrino sources.
Sources / p-value
IceCube warm-spot sources: 0.038
Sources in IceCube region with a distance ≤ 1°: 0.218
Sources in IceCube region with a distance > 1°: 0.466
According to Figure 11 and Table 1, the "IceCube Warm Spots" tend to have overlapping, simultaneous flaring periods in the low and high frequencies, at a 96.2% significance level, consistent with the histogram in Figure 10. Besides, the pre-trial p-value is smaller for the sources closer to the IceCube alerts, but still corresponds to a low significance level, with a value larger than 0.2. Given that the significance level is not high, only pre-trial p-values are reported for this test.
Time-integrated analysis: Correlations with IceCube 10-year track-like neutrinos
We now thoroughly investigate the possible neutrino emission from blazars on the basis of their multi-frequency activity, which is one of the main purposes of this paper. Since the most probable neutrino blazar, TXS 0506+056, has a significant flaring lag in the infrared with respect to the γ-ray (Figure 1), we speculate that the neutrino blazars' low-frequency and high-frequency flaring stages might not be simultaneous. In section 4.2.6, we built three groups of sources selected from the FIBS sources, that is, one trial group with 32 selected sources with TXS-like multi-frequency flaring behavior and two control groups according to their multi-frequency flaring stages: sources without simultaneous infrared and γ-ray flares and sources with highly simultaneous flares (NFS and CFS, see also Figure 2). Eleven sources with ACC and Fermi flaring activity similar to that of TXS 0506+056 were also selected for comparison. We first examine the difference in the potential neutrino association among the selected groups of blazars by counting the number of potential neutrino sources in these groups. This provides an alternative way to investigate the correlation between neutrinos and blazars. Table 2 shows the number and fraction of blazars potentially associated with IceCube neutrinos in each group. The total number of sources in each list is given in parentheses. Compared with the control groups, the fractions of potential neutrino sources for the TXS-like sources selected from both the FIBS and the ACC-Fermi sub-sample are higher. The relatively high fraction for the selected TXS-like sources from the infrared-γ-ray sub-sample indicates that sources with multi-frequency light-curve behaviors similar to TXS 0506+056 are more likely to be related to neutrinos, though the statistics are low given the small number of sources selected. It should be noted that the high fraction for the sources selected from the ACC-Fermi sub-sample, on the contrary, is tentative and biased, given that the multi-epoch data from the ACC catalog are far from complete.
Selected TXS-like | CFS | NFS
Infrared-γ-ray (FIBS): 4 (32), 12.50% | 2 (62), 3.23% | 32 (409), 7.82%
Millimeter-γ-ray (ACC-Fermi): 2 (12), 16.67% | - | -
Table 2. Number and fraction of potential neutrino sources in our selected lists and control groups.
The values inside the brackets represent the total number of sources in each selected list and control group. Example light curves of TXS-like sources and CFS samples are illustrated in Figures 1 and 2, and the detailed selection of those groups of sources is described in Section 4.2. To test our hypothesis, we made use of IceCube's 10 years of public data for point-source searches (IceCube Collaboration et al. 2021) and conducted a time-independent analysis to study the spatial correlation between the track-like neutrino events and the four groups of blazars. We calculated the number of neutrino events that have at least one blazar located inside their uncertainty regions (N observed ) and then used Monte Carlo simulation to estimate how many neutrino events lie around our groups of blazars by chance. The simulation was iterated 10 5 times with all the blazars' positions randomly scrambled in R.A., and the expected number of neutrinos close to our blazars by chance (N expected ) is the average of the simulated numbers of nearby events. By subtracting N expected from N observed , we obtain the excess number of neutrinos associated with our groups of blazars. A neutrino-count excess means that we observed more neutrinos around our test blazars' positions than expected. The p-values are estimated based on whether the number of neutrinos around our blazar samples is higher than what is expected to arise by chance. The excess numbers of neutrinos around the positions of the four groups of blazars, as well as the corresponding p-values as a function of the reconstructed energy of the 10-year IceCube events, are illustrated in Figures 12 and 13. Table 6 shows the measured and expected numbers of neutrinos (N observed and N expected ) around the four groups of blazars. The figures suggest that, in energy bins between ∼ 60 and ∼ 150 TeV, there is generally a small neutrino overflow in the vicinity of blazars with non-simultaneous infrared and γ-ray flares (those similar to TXS 0506+056), with an average overflow of ∼ 10 − 15 neutrinos. This overflow is not significant, with a pre-trial p-value around 0.13 − 0.27. We also determined the statistical significance using a right-tailed p-value, which describes the probability of randomly obtaining a number of neutrino events from the simulation greater than the observed value (N observed ). The right-tailed p-value for the neutrino overflow around the selected TXS-like blazars is around 0.13 − 0.27 as well, with the lowest value of 0.127 occurring at 150 TeV. On the contrary, there is no neutrino excess for those selected sources in higher energy bins. The excess count of neutrinos around the CFS control group increases gradually with the reconstructed energy of the neutrino events. This increase indicates that there might be a correlation between blazars with highly simultaneous infrared and γ-ray flares and extremely high-energy neutrinos. The significance of this possible trend is not high, with a right-tailed p-value of ∼ 0.2. No trial correction was performed for these pre-trial p-values as the correlations are not significant. We note that the excess number of neutrinos fluctuates around zero for the NFS sample and for the sources with millimeter and γ-ray flaring activities similar to TXS 0506+056. This implies that the number of neutrinos observed around those blazars is consistent with the number of neutrinos that lie nearby only by chance.
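The counting experiment above can be sketched in a few lines: count the track events with at least one sample blazar inside their angular-uncertainty region, then repeat the count with the blazars' right ascensions scrambled to obtain the chance expectation. The circular-error approximation and the helper names below are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def angular_sep(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (inputs in degrees)."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    cos_d = (np.sin(dec1) * np.sin(dec2)
             + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    return np.degrees(np.arccos(np.clip(cos_d, -1.0, 1.0)))

def n_events_with_blazar(ev_ra, ev_dec, ev_err, bl_ra, bl_dec):
    """Number of neutrino events with at least one blazar in their error circle."""
    return sum(np.any(angular_sep(ra, dec, bl_ra, bl_dec) <= err)
               for ra, dec, err in zip(ev_ra, ev_dec, ev_err))

def neutrino_excess(ev_ra, ev_dec, ev_err, bl_ra, bl_dec, n_trials=100_000):
    """Observed minus expected counts and the chance probability of the excess."""
    n_obs = n_events_with_blazar(ev_ra, ev_dec, ev_err, bl_ra, bl_dec)
    sims = np.array([n_events_with_blazar(ev_ra, ev_dec, ev_err,
                                          rng.uniform(0, 360, len(bl_ra)), bl_dec)
                     for _ in range(n_trials)])
    return n_obs - sims.mean(), np.mean(sims >= n_obs)
```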
Even though the highest overflow occurs at ∼ 500 TeV for the NFS sample, it is regarded as a random fluctuation given the extremely large trial errors. According to our selection criteria, the selected sources with TXS-like flaring behaviors do have related infrared and γ-ray flares with a time lag < 500 days, even though they are not highly simultaneous. Thus, we joined those 32 selected sources with the CFS sample to see if the neutrino excess around those blazars with "Related" flares would be more significant. We also performed further tests for blazars with only infrared or only γ-ray flares, i.e., without flares in both multi-frequency light curves. Figure 14 shows the same analysis as described in the previous paragraph, but for three different groups of blazars: "Related" flares, only flaring in infrared, and only flaring in γ-ray. There is no apparent excess of neutrino events around these three groups of blazars. According to our analyses, there might exist a weak trend for the high-energy neutrino alerts to be temporally correlated with infrared flares (section 5.2). From a population perspective, the neutrino emitters that left "clues" in the infrared might be weak sources: even though we considered the emission from the whole sample of blazars within the containment regions of the alerts, there is no statistically significant neutrino signal. A deeper and more complete infrared monitoring catalog might lead to a much more significant result. If significant associations in the infrared are found with higher-timing-resolution multi-epoch data in the future, they would be complementary to the possible radio correlation found by Plavin et al. (2021), providing a clearer picture of the association between lower-energy wavebands and neutrinos. On the other hand, a clear and significant correlation between the γ-ray flaring stages and the neutrino arrival times is absent from our analyses, which is consistent with the results of Franckowiak et al. (2020), Righi et al. (2019b), and references therein. The "direct" connection between γ-rays and neutrinos becomes ambiguous if the pionic photons cascade down to energies much lower than a GeV. Even though the protons and electrons are co-accelerated in, or interact with, the same photon field, we might not be able to detect the related high-energy photon flux and neutrino flux. Moreover, the γ-ray emission could originate from a different region than the neutrino production locus (Plavin et al. 2021).
7.2. Result 2: spatial correlation between simultaneous infrared and γ-ray sources and extreme-high-energy neutrino track events
A possible correlated trend between blazars with highly simultaneous infrared-γ-ray flares (CFS sample, Figure 2) and extreme-high-energy (E ≳ 1 PeV) track-like neutrino events is shown in section 6.3. Considering the steep spectral indices of atmospheric neutrinos compared to astrophysical ones, the higher the energy of a neutrino event, the higher the probability that the event is of astrophysical origin. Interestingly, our results also suggest that the sources with a significance level > 95% in IceCube point-source searches tend to have simultaneous infrared and γ-ray flaring periods (section 6.2). According to all these results, blazars with γ-ray flaring phases correlated with the infrared ones are more likely to be astrophysical neutrino sources.
The results imply that a typical single-zone synchrotron self-Compton (SSC) model might be able to explain the emission of those CFS blazars and suggest that neutrinos might be emitted from the same region as the infrared and γ-ray photons in the inner jet. This trend is consistent with the predictions of Padovani et al. (2015) and Murase (2017), and references therein. They suggest that blazars might dominate the astrophysical neutrino flux at around a PeV, or even at higher energies of 10 − 100 PeV, if the particles (primary electrons and protons) are co-accelerated in the jet. If so, we would expect a correlation among neutrinos, infrared flares, and γ-ray flares, under the assumption of an efficient cascade of TeV photons down to the GeV energy range, which is detectable by Fermi-LAT. The optical depth of the electromagnetic cascade depends on the photon field that drives the photo-hadronic process. In this case, the photon field is produced by the inverse Compton scattering of the jet's low-frequency synchrotron emission.
7.3. Result 3: spatial correlation between TXS-like sources and neutrino tracks at ∼ 60 − 150 TeV
In section 6.3, we have shown a small overflow of ∼ 10 track-like neutrino events with energies between ∼ 60 and 150 TeV around blazars with infrared and γ-ray flaring behaviors like TXS 0506+056 (Figure 1). Table 2 additionally suggests that the group of TXS-like blazars contains a larger fraction of potential neutrino sources than the other groups. These results indicate that those TXS-like blazars might also contribute to the observed diffuse neutrino flux, especially at energies of a few tens to hundreds of TeV. From the DCF test (section 6.1), it is known that blazars whose infrared flares significantly lag the γ-ray flares by ∼ 300 days, like TXS 0506+056, are not common. The significant lag implies that their low-frequency synchrotron emission might not come from the same site where the inverse Compton high-frequency photons are produced. In other words, the γ-ray emission might originate from a region closer to the central supermassive black hole than the synchrotron radiation. External Compton (EC) radiation may dominate the high-energy peaks in the SEDs of those blazars (instead of SSC radiation), and the external photon field may be highly relevant for the particles in their inner jet. One explanation for this significant lag is that, when turbulence in the jet causes shock waves that propagate along the jet, the high-frequency emission occurs upstream in the jet before the low-energy synchrotron photons become transparent (Boula et al. 2018; Max-Moerbeck et al. 2014). Alternatively, the temporal evolution of the emitting particles could also lead to the lag. Sahakyan & Giommi (2021) explained the lag of optical/UV flares with respect to X-ray/γ-ray ones for a transient blazar, 4FGL J1544.3-0649, with an SSC model considering the acceleration of electrons and the cooling of synchrotron photons. In their scenario, the injection of freshly accelerated electrons leads to a bright state in the high-energy band. We cannot exclude the possibility that accelerated protons are also injected into the emission zone. Either explanation, or the impact of the external photon fields, provides efficient conditions for the production of neutrinos in those blazars. In particular, many studies have shown the difficulty of explaining both the neutrino and electromagnetic emissions from TXS 0506+056 with a single-zone model (see Cerruti (2020) for a review of the hadronic emission from TXS 0506+056).
An optimistic scenario for the production of neutrinos is the existence of an external photon field. The remarkable point is that TXS 0506+056 is found to be a "masquerading BL Lac object", intrinsically an FSRQ, with a hidden broad-line region (BLR) and a radiatively efficient, geometrically thin disc accretion flow. This hidden BLR might act as an external field for TXS 0506+056, increasing the efficiency of the photo-hadronic process. Indeed, Padovani et al. (2021) suggest that the fraction of masquerading BL Lacs in their sample of 47 Fermi IBLs and HBLs in the vicinity of IceCube high-energy track-like neutrinos is > 24% and possibly as high as 80%. Another potential neutrino blazar, MG3 J225517+2409, reported to be (weakly) associated with ANTARES and IceCube neutrinos (Aublin 2019), is also a masquerading BL Lac. For other "intrinsic BL Lac objects", the existence of a jet sheath in the spine-sheath model (Tavecchio & Ghisellini 2015) or the complex and relatively broadband photon spectrum produced by radiatively inefficient accretion discs (Righi et al. 2019b) might be able to act as a target field for neutrino production. Furthermore, the SEDs of a complete sample of 104 radio-bright blazars (all with a 37 GHz flux density higher than 1 Jy) are found to be better described by an EC model with a dominant infrared external photon field that can originate from dust torus emission or molecular clouds in a spine-sheath geometry (Arsioli & Chang 2018). Those radio-bright blazars all have a VLBI 8 GHz flux density ≥ 150 mJy and thus are also in the samples of Plavin et al. (2021), which show a possible correlation with the neutrinos. We note that Ros et al. (2020) found signs of a spine-sheath structure in the jet of TXS 0506+056. If those TXS-like blazars are indeed related to IceCube neutrinos, the natural question that arises is: where do the neutrinos come from? Under the assumption that the high-energy and low-energy photons might not originate from the same site, we can envision multiple scenarios in which neutrinos might be emitted from (i) the same place as the high-energy radiation, (ii) the same place as the low-energy synchrotron emission, or (iii) other places related to neither the low- nor the high-frequency emission. For the first possibility, Xue et al. (2019) proposed a scenario in which neutrinos might be produced via the pγ process, and possibly the pp process, in the central region with an ambient gas cloud and an external photon field from the BLR, where the inverse Compton X-rays and γ-rays are radiated. On the other hand, the low-energy synchrotron radiation might originate from the outer region, where external photons from the BLR or the accretion disc are negligible. The second possibility could be realized in a scenario proposed by Plavin et al. (2021), in which neutrinos are emitted from the parsec-scale region of blazar jets, where X-ray SSC photons interact with accelerated protons. GeV γ-rays (probably dominated by external Compton radiation) are supposed to be emitted from a different region than the neutrino and synchrotron radiation. Neronov & Semikoz (2021) proposed an alternative scenario suggesting that the link between radio synchrotron emission and neutrinos is expected from proton-proton interactions. In the central region of blazars, the interaction between high-energy photons and the circum-nuclear medium could produce charged pions that decay into both neutrinos and synchrotron-emitting electrons.
To explain the third possibility, a collimated neutron beam is assumed to be produced by the interaction between cosmic-ray nuclei and synchrotron photons in the jet (Murase et al. 2018). The neutron beam escaping from the blazar emission zone could further interact with the external photon field and produce additional neutrinos farther away from the γ-ray and radio emission zones.
Upper limits of neutrino flux from TXS-like blazars
We can estimate the potential neutrino flux from our selected TXS-like blazars given the slight overflow of ∼ 60 − 150 TeV track-like neutrinos around them. Since the excess is not significant (Figure 12), the flux we estimate here is an upper limit. The expected number of astrophysical neutrinos detected during ∆T at declination δ is given by N = ∆T ∫ φ ν (E ν ) A ef f (E ν , δ) dE ν , where φ ν (E ν ) and A ef f (E ν , δ) are the differential neutrino flux and the effective area, respectively (Aartsen et al. 2020). We took A ef f from IceCube Collaboration et al. (2021) for two energy bins (4.8 < log 10 (E/GeV) < 5 and 5 < log 10 (E/GeV) < 5.2) and averaged the effective areas over declination, assuming our selected sources are uniformly distributed on the sky. N is set to 10, as we found an overflow of ∼ 10 track-like neutrinos around those TXS-like blazars. The obtained upper limits on the differential neutrino flux φ ν (100 TeV) in the two energy bins are 1.459 × 10 −18 and 1.646 × 10 −18 GeV −1 cm −2 s −1 ; a rough numerical sketch of this conversion is given below. Assuming a neutrino spectral index of −2, we evaluated the contribution of our selected TXS-like blazars to the 9.5-year astrophysical muon neutrino flux from Abbasi et al. (2021). Figure 15 indicates that those TXS-like blazars contribute up to ∼ 10% of the diffuse astrophysical neutrinos. These results do not conflict with previous stacking limits. A scenario where blazars with TXS-like multi-frequency activity dominate the blazar contribution to the IceCube astrophysical neutrino flux (in the energy range of 60 − 150 TeV) still holds.
Possible reasons for the absence of significant correlations
Even though we have shown some tiny overflows of neutrino emission from blazars with certain multi-frequency flaring phases, additional analyses are required to improve and refine our findings. Neither the overflow of track-like neutrinos around TXS-like or CFS blazars nor the correlation between infrared flares and IceCube alerts is statistically significant. The low significance could be caused by the limited ability of current neutrino observatories to detect weak neutrino signals of astrophysical origin. We need more sensitive detectors with improved pointing capability and larger volume to detect those weak neutrino sources. It is known that the diffuse astrophysical neutrino flux detected by IceCube is the aggregate of a large number (probably at least O(50); Brown et al. 2015; Murase et al. 2016) of faint neutrino emitters. Additionally, Palladino et al. (2019) suggested that unresolved BL Lacs with large baryonic loading might be the only sources that dominate the IceCube astrophysical neutrinos once stacking limits are accounted for, assuming the neutrino flux is powered by low-luminosity blazars. In their hypothesis, resolved high-luminosity BL Lacs or FSRQs can only contribute a limited fraction of the observed astrophysical neutrino flux. Alternatively, we cannot exclude the possibility that the majority of blazars are inefficient neutrino emitters in the sub-TeV to sub-PeV range.
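As referenced above, the conversion of the ~10-event overflow into a differential-flux upper limit follows from the counting relation N = ∆T ∫ φ_ν(E) A_eff(E, δ) dE, solved per energy bin under a flat-flux assumption. The numbers in the sketch below are placeholders, not the published IceCube effective areas.

```python
def flux_upper_limit(n_events, live_time_s, e_lo_gev, e_hi_gev, a_eff_cm2):
    """Flux normalization phi_nu (GeV^-1 cm^-2 s^-1) for one energy bin,
    assuming a flat differential flux across the bin:
    N = live_time * phi_nu * A_eff * (E_hi - E_lo)."""
    return n_events / (live_time_s * a_eff_cm2 * (e_hi_gev - e_lo_gev))

# Illustrative only: ~10 excess events over ~10 years in the
# 10^4.8 - 10^5.0 GeV bin with a placeholder effective area.
print(flux_upper_limit(10, 10 * 3.15e7, 10**4.8, 10**5.0, 2.0e4))
```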
Blazars are thought to be more relevant for neutrinos at higher energies of 10 − 100 PeV (as discussed in Section 7.2), which constitute only an extremely small fraction (≲ 0.5%) of the 10-year track-like neutrinos. On the contrary, the majority of the IceCube diffuse flux must come from other sources. Tidal disruption events (TDEs) are one of the possible candidates (Reusch et al. 2022). While Stein (2019) constrained the contribution of TDEs to be ≤ 27%, Bartos et al. (2021) suggested that probably more than 50% of the IceCube astrophysical flux might come from either AGNs or TDEs at the 90% confidence level. Furthermore, at least a fraction of 10% of the astrophysical neutrinos might come from sources other than AGNs and TDEs, with 80% probability. Those neutrinos might originate from unresolved objects or truly diffuse processes, such as intergalactic shocks and dark matter annihilation, which might also dominate the diffuse γ-ray background below 100 GeV. The incompleteness of our multi-frequency and multi-epoch data might also bias our results, especially the large gap from MJD 55600 to 56600 in the NEOWISE and AllWISE MEP data. This gap in all the light curves of our blazars might lead to a number of sources with TXS-like multi-frequency behaviors not being properly selected, because some of the infrared flaring stages fall exactly into the "blank period", causing a lower significance level due to insufficient sources. Apart from the gap and the large time bins of the WISE data, the ACC and Swift XRT data used in this paper are sometimes taken from target-of-opportunity observations and are not equally binned. Unfortunately, there are no other complete and equally binned light curves available. In the future, with the better sensitivity of next-generation neutrino observatories and a more complete multi-frequency coverage of the light curves, we could further confirm or refute the hypothesis that those blazars are indeed efficient neutrino emitters.
CONCLUSIONS
We have performed a series of analyses to search for potential neutrino emission from multi-frequency catalogs (ACC, WISE, Swift XRT, and Fermi 4FGL-DR2) and various blazar samples (3HSP, 5BZCAT, WIBRaLS, and 4LAC), investigating possible correlations with IceCube alerts and 10-year track-like events. The associations between IceCube neutrinos and astrophysical sources are thoroughly discussed in the paper by examining the multi-frequency flaring stages of sources within the containment regions of IceCube alerts and of blazars with different types of multi-frequency activity. A time-dependent analysis to investigate the coincidence between the bright stages of multi-frequency sources and the arrival times of the neutrino alerts suggests a possible correlation trend between the infrared flares and the IceCube alerts. The cross-matching between blazars with various infrared-γ-ray flaring behaviors and 10-year track-like neutrino events shows a small overflow of neutrinos around the blazars with flaring phases highly similar to TXS 0506+056 (with a significant lag in the infrared) or highly simultaneous (CFS sample). In a nutshell, we have shown that, considering the infrared and γ-ray flaring behaviors, the CFS sample and the TXS-like sources might be the most likely neutrino-source candidates among blazars. Our results are consistent with the prediction that blazars might dominate the astrophysical neutrino flux at PeV or higher energies, and consistent with the current limits on the blazars' contribution to the IceCube astrophysical neutrino flux.
Moreover, the possible neutrino-emitting sites of the TXS-like blazars are discussed in detail, accounting for several models from the literature. An unstable jet or a changing accretion rate would lead to the formation of a jet blob, which expands while propagating outwards along the blazar's jet (Chen & Zhang 2021). The radio outburst might result from this inflating blob region, caused by the long-term expansion as the synchrotron radiation transitions from optically thick to thin. This expansion effect does not have to lead to an outburst at higher frequencies (e.g., γ-ray), while the charged particles accelerated inside this large-scale blob may account for high-energy neutrino emission. The (plausible) statistical correlation between the radio flaring phases and the arrival of neutrinos (Plavin et al. 2020) might be a clue to this scenario (and maybe also our infrared result, see Figure 5). Furthermore, small perturbations in this blob would bring about flares at higher frequencies on a shorter and non-simultaneous time scale, considering that the radio outburst is related to the inflating blob. This is not in conflict with our results (Figure 12). A model considering a larger scale for the acceleration sites, inspired by (i) the statistical correlation found in radio (and/or infrared) and (ii) the multi-frequency activity of plausible neutrino blazars, would be of importance for understanding the neutrino-emitting mechanism of those blazars. Although the putative correlations investigated in this work are not significant, our results suggest that additional studies, with more complete multi-frequency light curves and more sensitive neutrino detectors, are worth considering. In the future, with the next generation of neutrino observatories (like TRIDENT, IceCube-Gen2, P-ONE, KM3NeT, and Baikal-GVD: Ye et al. 2022; Aartsen et al. 2021; Agostini et al. 2020; Adrián-Martínez et al. 2016; Baikal-GVD Collaboration et al. 2018), we expect to better investigate the neutrino signatures of those sources, hopefully unveiling the neutrino/hadronic processes taking place in blazars.
Table 3. Analyses, source lists, selection criteria, and motivation throughout the paper. Detailed descriptions of the multi-frequency data and source lists are given in sections 3.3 and 4. c There are four groups of blazars tested with the IceCube 10-year track events: one experimental group of selected TXS-like sources (Figure 1) from FIBS, two control groups of CFS (Figure 2)
Figure 16 continued. Light curves of selected TXS-like sources. The upper panel is the infrared light curve from the WISE multi-epoch data, and the middle panel is the Fermi γ-ray light curve analyzed with the Fermi Science Tools. The lower panel represents the flaring periods in infrared (thick blue lines) and γ-ray (thick red lines) as well as the simultaneous flaring stages (green lines). Black lines represent the arrival times of the neutrino alerts.
C. Table 6. Number of neutrino events around blazars in our selected lists and control groups.
N measured represents the actual number of observed neutrinos, N expected represents the average neutrino count from the simulation with the blazars' R.A. randomly scrambled 10,000 times, and 1σ is the statistical error at the 68% confidence level from the simulation.
Exponential convergence to equilibrium for the d-dimensional East model Kinetically constrained models (KCMs) are interacting particle systems on $Z^d$ with a continuous-time constrained Glauber dynamics, which were introduced by physicists to model the liquid-glass transition. One of the most well-known KCMs is the one-dimensional East model. Its generalization to higher dimension, the d-dimensional East model, is much less understood. Prior to this paper, convergence to equilibrium in the d-dimensional East model was proven to be at least stretched exponential, by Chleboun, Faggionato and Martinelli in 2015. We show that the d-dimensional East model exhibits exponential convergence to equilibrium in all settings for which convergence is possible. This work has been supported by the ERC Starting Grant 680275 MALIG. Introduction Kinetically constrained models (KCMs) are interacting particle systems on graphs, in which each vertex (or site) of the graph has state (or spin) 0 or 1. Each site tries at rate 1 to update its spin, that is to replace it by 1 with probability p and by 0 with probability 1 − p, but the update is accepted only if a certain constraint is satisfied, the constraint being of the form "there are enough sites with spin zero around this site". KCMs were introduced by physicists to model the liquid-glass transition, which is an important open problem in condensed matter physics (see [16,11]). In addition to their physical interest, they are also mathematically challenging because the presence of the constraints gives them a very different behavior from classical Glauber dynamics and renders most of the usual tools ineffective. A key feature of KCMs is the existence of blocked spin configurations, which makes the large-time behavior of KCMs hard to study, especially their relaxation to equilibrium when starting out of equilibrium. Indeed, worst case analysis does not help and standard coercive inequalities of the log-Sobolev type also fail. Furthermore, the dynamics of KCMs is not attractive, so coupling arguments that have proven very useful for other types of Glauber dynamics are here inefficient. Because of these difficulties, convergence to equilibrium has been proven only in a few models and under particular conditions (see [6,3,7,15]). There is only one model for which exponentially fast relaxation to equilibrium was proven under general conditions (apart from some models on trees that use the same proof): the East model, whose base graph is Z and in which an update is accepted when the site at the left has spin 0. Introduced by physicists in [13], the East model is the most well-understood KCM (see [9] for a review). A natural generalization of the East model to Z d , introduced in [1], is to accept updates at a site x when x − e has spin 0 for some e in the canonical basis of R d . The higher dimension makes this d-dimensional East model much harder to study than the unidimensional one, and until now the relaxation to equilibrium was only proved to be at least stretched exponential ([7]). In this article, we prove that the relaxation to equilibrium in the d-dimensional East dynamics is exponentially fast as soon as the initial configuration is not blocked. This also allowed us to prove that the persistence function, which is the probability that a given site has not yet been updated, decays exponentially with time.
Our results, which are the first to hold for a KCM in dimension greater than 1 and for any p, may help to understand further the out-of-equilibrium behavior of the d-dimensional East model. Indeed, such an exponential relaxation result was key to proving "shape theorems" in one-dimensional models in [2,10,4]. This paper is organized as follows: we begin by presenting the notations and stating our results in section 2, then we prove the exponential relaxation to equilibrium in section 3, and finally we show the exponential decay of the persistence function in section 4. Notations and results We fix d ∈ N * . For any Λ ⊂ Z d , the d-dimensional East model (in the following, we will just call it "East model") in Λ is a dynamics on {0, 1} Λ . The elements of Λ will be called sites and the elements of {0, 1} Λ will be called configurations. For any η ∈ {0, 1} Λ , x ∈ Λ, the value of η at x will be called the spin of η at x and denoted by η(x). To define the East dynamics in Λ ⊂ Z d , we begin by fixing p ∈]0, 1[. Informally, the East dynamics can be seen as follows: each site x, independently of all others, waits for a random time with exponential law of mean 1, then tries to update its spin, that is to replace it by 1 with probability p and by 0 with probability 1 − p, but the update is accepted if and only if one of the x − e i is at zero. Then x waits for another random time with exponential law, etc. More rigorously, independently for each x ∈ Λ, we consider a sequence (B x,n ) n∈N * of independent random variables with Bernoulli law of parameter p, and a sequence of times (t x,n ) n∈N * such that, denoting t x,0 = 0, the (t x,n − t x,n−1 ) n∈N * are independent random variables with exponential law of parameter 1, independent from (B x,n ) n∈N * . The dynamics is continuous-time, denoted by (η t ) t∈R + , and evolves as follows. For each x ∈ Λ, n ∈ N * , if there exists i ∈ {1, . . . , d} such that η tx,n (x − e i ) = 0, then the spin at x is replaced by B x,n at time t x,n . We then say there was an update at x at time t x,n , or that x was updated at time t x,n . (If there are sites x − e i , x ∈ Λ, i ∈ {1, . . . , d} that are not in Λ, we need to fix the state of their spins in order to run the dynamics.) One can use the arguments in part 4.3 of [17] to see that this dynamics is well-defined. For any η ∈ {0, 1} Λ , we denote the law of the dynamics starting from the configuration η by P η , and the associated expectation by E η . If the initial configuration follows a law ν on {0, 1} Λ , the law and expectation of the dynamics will be respectively denoted by P ν and E ν . In the remainder of this work, we will always consider the dynamics on Z d unless stated otherwise. For any t ≥ 0 and Λ ⊂ Z d , we denote F t,Λ = σ(t x,n , B x,n , x ∈ Λ, t x,n ≤ t) the σ-algebra of the exponential times and Bernoulli variables in the domain Λ between time 0 and time t. We notice that if η 0 is deterministic, for any x ∈ Z d , η t (x) depends only on the t x,n , B x,n with t x,n ≤ t and on the state of sites "below" x: x − e 1 , . . . , x − e d , which in turn depends only on the η 0 (x − e i ), t x−e i ,n , B x−e i ,n with t x−e i ,n ≤ t and on the state of the sites "below" the x − e i , etc. Therefore η t (x) depends only on η 0 and on the t y,n , B y,n with t y,n ≤ t and y in the set of sites "below" x. We will call µ the product Bernoulli(p) measure on the configuration space {0, 1} Λ . The expectation with respect to µ of a function f : {0, 1} Λ → R, if it exists, will be denoted µ(f ).
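The graphical construction above (independent exponential clocks and Bernoulli marks, with an update accepted only when some x − e_i carries a zero) translates directly into a small event-driven simulation. The sketch below runs the dynamics on a finite box and treats all sites outside the box as zeros, which is one possible boundary choice made for illustration, not the convention used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def east_model(eta0, p, t_max):
    """Simulate the d-dimensional East dynamics on a finite box {0,...,L-1}^d.

    eta0 : d-dimensional 0/1 numpy array (initial configuration)
    p    : probability that an accepted update sets the spin to 1
    Sites x - e_i falling outside the box are treated as zeros, so boundary
    sites are unconstrained (an assumption of this sketch)."""
    eta = eta0.copy()
    d = eta.ndim
    sites = list(np.ndindex(eta.shape))
    unit = np.eye(d, dtype=int)
    t = 0.0
    while True:
        # superposition of len(sites) independent Exp(1) clocks
        t += rng.exponential(1.0 / len(sites))
        if t > t_max:
            return eta
        x = sites[rng.integers(len(sites))]
        # constraint: some x - e_i has spin zero (or lies outside the box)
        if any(x[i] == 0 or eta[tuple(np.subtract(x, unit[i]))] == 0
               for i in range(d)):
            eta[x] = rng.random() < p

# Example: 2-dimensional East dynamics started from a Bernoulli(1/2) configuration
print(east_model(rng.integers(0, 2, size=(8, 8)), p=0.7, t_max=10.0))
```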
µ is the equilibrium measure of the dynamics, which can be seen using reversibility, since the detailed balance is satisfied. We say that a measure ν on {0, 1} Z d satisfies condition (C) when This is the minimal condition on η for which to expect convergence to equilibrium, since if the initial configuration contains only ones, there can be no updates, hence the dynamics is blocked. • the product Bernoulli(p ′ ) measures with p ′ ∈ [0, 1[, which are particularly relevant for physicists (see [14]). We can now state the main result of the paper, the convergence of the dynamics to equilibrium: Remark 3. With only minor modifications in the proof, one can also show exponential convergence of the quantity Another quantity of interest is the persistence function. If ν is the law of the initial configuration and x ∈ Z d , the corresponding persistence function can be defined as F ν,x (t) = P ν (τ x > t) for any t ≥ 0, where τ x is the first time there is an update at x. The persistence function is a "measure of the mobility of the system": the more the spin at x can change, the faster it will decrease. Theorem 2 allows to prove exponential decay of the persistence function: Remark 5. The decay of the persistence function can not be faster than exponential, because τ x ≥ t x,1 , thus F ν,x (t) ≥ P ν (t x,1 ≥ t) = e −t . Moreover, since the spin of a site x will remain in its initial state until τ x , the convergence to equilibrium can not be faster than exponential. Consequently, the exponential speed is the actual speed. Remark 6. In theorem 2 and corollary 4, one could replace Proof of theorem 2 The proof of the theorem can be divided in three steps. Firstly, we use a novel argument to find a site of (−N) d at distance O(t) from the origin that remains at zero for a total time Ω(t) between time 0 and time t (section 3.1). Afterwards, we use sequentially a result of [7] to prove that the origin also stays at zero for a time Ω(t) (section 3.2). Finally, we end the proof of the theorem with the help of a formula derived in [7]. 3.1. Finding a site that stays at zero for a time Ω(t). For any t ≥ 0 and κ > 0, =0} ds the time that x spends at zero between time 0 and time t. We also define G = {∃x ∈ D | T t (x) ≥ 1−p 4 t}. We then have Lemma 7. For any κ > 0, there exist constants Proof. We set κ > 0. It is enough to prove the lemma for t ≥ 1/(2dκ − κ), so we fix t ≥ 1/(2dκ − κ). Let η ∈ {0, 1} Z d with x ∈ {−⌊κt⌋, . . . , 0} d such that η(x) = 0 be the initial configuration. We define E = {y ∈ D | there was an update at y in the time interval [0, t/2]}. Moreover, an oriented path will be a sequence of sites (x (1) , . . . , x (n) ) with n ∈ N * such that for any k ∈ {1, . . . , n − 1}, there exists i ∈ {1, . . . , d} with The proof of lemma 7 relies on the following auxiliary lemma, whose proof will be postponed until after the proof of lemma 7: This auxiliary lemma implies that we either get a site satisfying G, or a path of Ω(t) sites that were updated before time t/2. In the latter case, the orientation of the model allows us to use a conditioning which yields that the probabilty that none of the sites of the path stays at zero for a time 1−p 4 t is the product of the probabilities for each of the sites not to stay at zero for a time 1−p 4 t, and we can prove that this probabilty is lower bounded. Let us prove lemma 7 by writing down the argument. For any k ∈ {0, . . . , d⌊κ ′ t⌋}, we define the "diagonal hyperplane" For any k ∈ {d⌊κt⌋, . . . 
, ⌊κ ′ t⌋}, we define a σ-algebra F k as follows: For any k ′ ∈ {d⌊κt⌋, . . . , ⌊κ ′ t⌋} with k ′ > k, one can see that everything that happens at the sites in H k ′ between times 0 and t is F k -measurable, thus U k ′ and G c k ′ are F k -measurable. Moreover, for any x ∈ H k , the spins of the x − e i , i ∈ {1, . . . , d} in the time interval [0, t/2] are F k -measurable and the t x,n ≤ t/2 are also F k -measurable. Therefore the event {there was an update at x between time 0 and time t/2} is F k -measurable, hence U k is F k -measurable. Consequently, Therefore, if we can find a constant c ′ , which is lemma 7. Consequently, we only need to prove (1). Let k ∈ {d⌊κt⌋, . . . , ⌊κ ′ t⌋}. For any x ∈ H k , if the state of the x − e i , i ∈ {1, . . . , d} between time 0 and time t is known, and if the t x,n ≤ t/2 are also known, the state of x between time 0 and time t depends only on the t/2 < t x,n ≤ t and on the B x,n such that t x,n ≤ t. Therefore, conditionnally on F k , the state of x between time 0 and time t depends only on {t/2 < t x,n ≤ t} ∪ {B x,n | t x,n ≤ t}. Moreover, these sets for x ∈ H k are mutually independent conditionnally on F k , hence the states of the x ∈ H k between time 0 and time t are mutually independent conditionnally on F k , which implies Moreover, we saw that the events {x ∈ E} for x ∈ H k are F k -measurable, therefore we can write In addition, for x ∈ H k ∩ E, we have the following (in the second inequality we use the Markov inequality): Furthermore, for s ∈ [t/2, t], since x ∈ H k ∩ E, conditionnally on F k we know that there was an update at x before time s, but not the associated Bernoulli variable, hence P η (η s (x) = 1|F k ) = p. This implies Moreover, Proof of lemma 8. Let us suppose that no site of D stays at zero during the time interval [0, t/2]. Then E contains x, because x ∈ D and if there was no update at x between time 0 and time t/2, the spin of x would stay during this whole time interval at its initial state of 0, which does not happen by assumption. We are going to show that if we have an oriented path in E starting from x that does not reach D \ D ′ , we can add a site at its end in a way we still have an oriented path in E. This is enough, because from the path composed only of x we can do at most d⌊κ ′ t⌋ steps before reaching D \ D ′ . Thus we consider an oriented path in E starting from x that does not reach D \ D ′ . Let us call y its last site; we have y ∈ D ′ . Since y ∈ E, y was updated between time 0 and time t/2. This implies that one of the y − e i , i ∈ {1, . . . , d}, that we may call y ′ , was at zero at the moment of the update. Moreover, y ∈ D ′ , hence y ′ ∈ D. There are two possibilities: • Either the spin of y ′ was not zero in the initial configuration. Then there was an update at y ′ before the update at y, hence before time t/2, so since y ′ ∈ D, y ′ ∈ E. • Or the spin at y ′ was zero in the initial configuration. In this case, if there was no update at y ′ before time t/2, y ′ stayed at 0 during the whole time interval [0, t/2]. However y ′ ∈ D, so this is impossible by assumption. Therefore there was an update at y ′ before time t/2, which implies y ′ ∈ E. Therefore y ′ ∈ E in both cases, which allows to add a site to the path and ends the proof of lemma 8. Proof of corollary 4 This proof is inspired from the proof of the lemma A.3 of [8].
4,019.4
2019-01-01T00:00:00.000
[ "Physics" ]
The correlation between students’ grammar mastery and writing ability in descriptive text of the first grade students in MAN 1 Bandar Lampung The objectives of the study were to find out the correlation between students’ grammar mastery and writing ability and to find out what aspect of writing has the most correlation on students’ grammar mastery. This research used quantitative approach. The subject of the research was the first grade students of MAN 1 Bandar Lampung. It consisted of thirteen classes; the total number of population was 439 students. By using sample random sampling, the sample of this research was 40 students taken from twelve classes (3 and 4 students in each class). A set of the grammar test in the form of multiple choices was used to measure students’ grammar mastery and writing test was used to measure students’ writing ability. Pearson Product Moment Correlation in SPSS 25.0 for windows was applied in this research to analyze the data. The result indicated that there was a correlation between students’ grammar mastery and their writing ability since the significant value was 0.730 resided between 0.600 - 0.800, which means the strength is high correlation. The data indicated that Sig. (2-tailed) = 0.00 which was lower than 0.05. This suggests that a student who has high scores in grammar mastery, she or he also get a good score in writing. It was assumed that language use was the aspect of writing has the most correlation on students’ grammar mastery because; it was found that during the writing test, the students showed various errors, such as using incorrect tenses and unstructured sentences. Thus, it was suggested for teacher should explain more about grammar before teaching writing. From the research findings, it can be concluded that there was a positive correlation between students’ grammar mastery and writing ability in descriptive text and language use was the aspect of writing that most correlated with grammar mastery. I. INTRODUCTION As a productive skill, writing is considered to be the most complex language skill to be learnt. As stated by Richards and Renandya (2002) on their book that writing is the most difficult skill for second language students to be mastered. It is because writers are required to have a lot of ideas and concentration in constructing writing. However, the difficulty is not only in generating and managing ideas, but also changing these ideas into readable text (Alameddine & Mirza, 2016). In writing, the writers do not only need ideas but also skills to write their ideas into written form so that the reader can understand what is meant by the writer. Even tough writing is difficult to be mastered, students should have writing skill because writing is an important role in learning process to deliver their ideas. According to Walsh (2010) in (Klimova, 2012), writing is an important skill for students to learn because it is used in education and the workplace extensively. Writing can be a means for everyone to communicate and inform the information to others as a writer and reader. By mastering written English, students can communicate with people around the world. In short, students need understanding and mastery in writing skill. In addition, writing is considered as a cognitive skill which combines knowledge and understanding with practice in language use. Language skills and language components are related to each other, so they cannot be separated. 
Therefore, we can find the language components in language skills (Andini et al., 2017). In writing, the used language components are grammar, vocabulary, pronunciation and punctuation. Furthermore, this research indicates that writing ability is as language skill and grammar mastery is as language component. It means that writing and grammar are related to each other. According to (Brown, 1994), grammar is a system of rules that governs the conventional arrangement and relationship of words in a sentence. Hence, it is useful for students to know how to combine words to write meaningful sentences. Furthermore, the importance of using grammar in writing is stated by Frodesen and Eyring (2000) in (Fatemi, 2008), the focus on form (grammar), meaning and use in composition can help students to develop and enrich the linguistic resources needed to express ideas effectively. From the statement, it can be said grammar can help students to increase their writing in delivering ideas. Furthermore, as one of the aspects that affect students' writing process to express their ideas, grammar plays an important role in order to form words into sentences appropriately. A research conducted by Adhiyatma and Jamiluddin (2015), Putri et al. (2016), Etfita (2019), Fatemi (2008, and Septiani (2014) found that the correlation between grammar mastery and writing ability were significant. It means, students' writing ability can be affected by their grammar mastery. Although several studies have revealed a positive correlation between grammar and writing, however more specific research on the grammar aspect needs to be conducted. Therefore, the researcher intends to analyze the correlation between aspects of writing and the use of language, especially grammar, namely mastery of grammar forms in descriptive text. Additionally, based on the first grade syllabus of high school, the materials in learning grammar are pronouns, tenses, adjectives, verbs, nouns and adverbs. These materials are required to support students' understanding in writing. Furthermore, according to basic competences of curriculum 2013, students are expected to be able to write some texts in learning writing such as descriptive, recount and narrative texts. In this research, the researcher focuses on descriptive text. Descriptive text is a text which has function to describe an object such as a place, person, animal and thing. According to Gerot and Wignell (1994), descriptive text is a type of text that aims to provide information by describing particular things. This type of text will be used in this research because it mostly use of grammar such as simple present tense, noun phrase, action verb, to be present, and to be past. Accordingly the background above, the writer is motivated to conduct an investigation on the correlation between grammar mastery and writing ability especially in descriptive text. The writer is intended to research about: "The Correlation between Students' Grammar Mastery and Writing Ability in Descriptive Text of The First Grade Students in MAN 1 Bandar Lampung" II. METHODS This research used quantitative approach. The researcher used Correlation Research Design to answer the research question. There is no treatment in this research. By using simple random sampling, the researcher chose 40 students taken from twelve classes (3 and 4 students in each class) at MAN 1 Bandar Lampung. This research used grammar and writing test to collect the data. In grammar test, the researcher used multiple choices test. 
There were 30 questions consisted of simple present tense, noun phrase, linking verb, to be present, and to be past. For writing test, the students were asked to describe about someone their love. In assessing students' writing ability, the researcher used the assessment suggested by Jacobs et al. (1981) such as content, organization, vocabulary, language use, and mechanics. Then, the scores of the students' grammar and writing scores were analyzed by using Pearson Product Moment to know the correlation. III. RESULTS AND DISCUSSIONS Result This chapter describes a general description of data gained by the researcher during the research. The data were collected from the result of the students' grammar and writing test. The validity and reliability test had been conducted before the researcher administered the test. Before answering the research question of this study, the data description of this research has been obtained. The Result of Grammar Test In grammar test, the mean of students' grammar mastery (X) was 72.2. Among 40 students, the highest score of grammar test was 83 and it was achieved by three students. The lowest score was 60 and it was achieved by two students. There were four students who got 63, five students got 67, six students got 70, ten students got 73, six students got 77, and four students got 80. The Result of Writing Test For the students' writing test, the researcher calculated the students' scores to get the final score (mean score of each student was based on the score from 2 raters which were the researcher and the teacher). The mean of students' writing ability (Y) was 75.7. The highest score of writing test was 85 and the lowest score was 66. There were ten students who got 66 to 70, eight students got 71 to 75, sixteen students got 76 to 80, and six students got 81 to 85. The Correlation between Students' Grammar Mastery and Writing Ability As the data are shown below, the researcher got the result of each variable. This is the result of the correlation between students' grammar mastery and their writing ability. ,000 N 40 40 **. Correlation is significant at the 0.01 level (2-tailed). The table above showed that the correlation coefficient value of r = 0.730, N. Sig = 0.000 which means lower than level of significant 0.05. It indicated that there was a significant correlation between the two variables. This research has a positive correlation because the variables had the same moderate score. If the subjects had low grammar scores, they also had low scores in writing descriptive. Conversely, if they had high grammar scores, they also had high scores in writing descriptive text. From the r number (0.730), the researcher could use it to determine the strength of the correlation between the two variables. The number of 0.730 is in 0,600 -0,800, which means that the interpretation correlation between the two variables is high. Hypothesis Testing This researcher was done in collecting data and has got the result of the correlation. To answer the research problem, the writer had to measure whether the hypothesis was rejected or not. The writer has two hypotheses in this research, those are: 1. Null Hypothesis (H0): There is no correlation between students' grammar mastery and their writing ability. 2. Alternative Hypothesis (H1): There is correlation between students' grammar mastery and their writing ability. To know the answer for the last hypothesis, the researcher used SPSS hypothesis testing based on the N. Sig (number of significance). 
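For readers who do not use SPSS, the following is a minimal sketch of how the same Pearson correlation and two-tailed significance test could be reproduced with SciPy; the score lists below are placeholders, not the actual data of the 40 participants.

```python
# Minimal sketch of the correlation analysis reported above, using SciPy in
# place of SPSS 25.0. The two score lists are hypothetical placeholders.
from scipy.stats import pearsonr

grammar_scores = [60, 63, 67, 70, 73, 77, 80, 83]   # hypothetical grammar scores (X)
writing_scores = [66, 68, 70, 73, 76, 78, 81, 85]   # hypothetical writing scores (Y)

r, p_value = pearsonr(grammar_scores, writing_scores)
print(f"Pearson r = {r:.3f}, two-tailed p = {p_value:.4f}")

# Decision rule used in the paper: reject H0 when the two-tailed p-value
# is below the 0.05 significance level.
if p_value < 0.05:
    print("H0 rejected: the two variables are significantly correlated.")
else:
    print("H0 not rejected.")
```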
From the result of correlation above ( The result of the data showed that the significance was 0.000 (Level of significance 0.01 and 2 tailed) which clarified that H0 was rejected. The hypothesis testing concluded that N. Sig <5%, where H1 was accepted. It means that both students' grammar mastery and their writing ability in descriptive text are correlated. Thus, it can be concluded that "There is a correlation between students' grammar mastery and writing ability in descriptive text", answered the research problem. The Result of Writing Aspects To answer the second research question that was what aspects of writing have the most correlation on students' grammar mastery, the mean of each aspect of writing was calculated in favor of getting the result. It is used to see the correlation of writing aspects and students' grammar mastery. The data are below: ,000 N 40 40 **. Correlation is significant at the 0.01 level (2-tailed). From table 3.2, it can be assumed that from five writing aspects which are content, organization, vocabulary, language use, and mechanics. Language use was the aspect that most correlated with the students' grammar mastery by having the number of significant 0.603. It can be concluded that language use was the aspect of writing that most correlated with students' grammar mastery. In conclusion, the answer to second research question was language use. Discussion In this research, the researcher had collected the data by using two instruments. The first was grammar test that given to all students as participants in this research. They asked to answer the questions that given by the researcher through Google Form. This test used to know the students' grammar mastery. The second instrument was writing descriptive text. This test was conducted after the grammar test. In this discussion the researcher intended to present the result from the analysis of the findings. The analysis has been accomplished in order to answer the research problem. Moreover, it was found that the coefficient correlation between students' grammar mastery and their writing ability was 0.730. It can be concluded that there was positive correlation both two variables in high correlation. As the researcher explained before, if students had high grammar mastery, it would have an impact or influence on their writing especially in descriptive text. Moreover, students could also fail in writing if their grammar mastery is poor. In short, the increase in students' grammar was followed by the improvement of writing skills. Viet (1989) says that the knowledge of the structure can also be a tool to analyze our writing. When students master in understanding grammar, they also have a good ability in writing, because they know how to arrange the sentences into good texts that are understandable and meaningful. This factor implies that the students' activity and frequency in mastery of grammar provide a useful contribution to increase their achievement in writing descriptive texts. Furthermore, Istiqomah (2014) states that as one of the components in writing, grammar plays an important role in writing and clearly influences students' writing because the better students' grammar mastery, the better their writing. Some people may be good at writing descriptive text with lots of good ideas to express but if they don't have knowledge of grammar, they will have difficulty communicating those ideas to others. Most studies have revealed a positive correlation between grammar and students' writing ability. 
A research was conducted by Putri (2018), Widya and Wahyuni (2018), and Putri et al. (2016). The result of those studies showed that the students' grammar mastery affected their enhancement in writing English. It means that there is a significant relationship between grammar mastery and writing ability. However, there is difference those studies and this research. The researcher investigated which aspect was the most correlated among aspects of writing. In addition, the researcher did the research via online. There are some criteria in writing such as content, organization, vocabulary, language use, and mechanics. Language use is the most important aspect in writing which has the highest correlation with students' grammar mastery by having a significance number was 0.603. Language use is a communicative meaning of language. It can be connected to usage, which relates to the rules for creating language and the structures we use to be meaningful (Klimova, 2011). According to Jacobs (1981) language use refers to the correct use of grammatical forms and syntactic patterns. It can be seen from the well-formed sentence construction because this can also affect the comprehensibility of a text. Furthermore, Klimova (2011) indicates that in the aspects of writing, particularly the language use, conforms to the needs of an ESL learner. Therefore, the most frequent errors for second language learners are the use of articles, word order, tenses and prepositions. However, it is common for students feel uneasy when they write because they have to express their ideas in foreign language. Although they have some problems in writing, with continuous practice, many mistakes can be avoided and can improve their writing skills as well (Lodge, 2012). From the explanation above, it can be concluded that language use and grammar are aspects that students must be mastered in writing because the two variables have the highest correlation. By using proper language use or grammar in writing, it will be created good communication. IV. CONCLUSIONS AND SUGGESTIONS Conclusion According to the data analysis and discussion, the researcher concludes that there is a positive correlation between students' grammar mastery and writing ability with the value of Pearson correlation is 0.703, Sig. = 0.000 which means lower than level of significant 0.05. It indicated that there was a significant correlation between students' grammar mastery and writing ability. It means that when the students have good grammar mastery, they will have good achievement in writing. Meanwhile, students who get bad score in grammar, they will get poor in writing as well. Therefore, language use is the aspect of writing with the value of correlation is 0.603, Sig. = 0.000 which influences most of the students because the students should know how to arrange the sentences into an understandable and meaningful text. Suggestion After conducting the research, the author recommended several suggestions for both English teacher and further research. Firstly, the teacher should explain more about grammar before teaching about writing. The way they explain, it determines how the students' understand about the grammar. The teachers have important role in helping and facilitating the students to learn grammar well, so the students' writing ability will increase. 
Besides, the teacher should explain to the students that there are several criteria for assessing writing, so that the students do not focus only on grammatical aspects but also learn how to organize a text well by considering its content, organization, vocabulary, and mechanics. Furthermore, future researchers should note that grammar is not the only factor that influences students' writing; vocabulary mastery can also affect how students express their ideas in writing, which could be examined through the correlation between grammar and vocabulary. Therefore, the researcher suggests that other researchers conduct further studies on different variables.
4,018
2021-01-01T00:00:00.000
[ "Education", "Linguistics" ]
TMU System for SLAM-2018 We introduce the TMU systems for the second language acquisition modeling shared task 2018 (Settles et al., 2018). To model learner error patterns, it is necessary to maintain a considerable amount of information regarding the type of exercises learners have been learning in the past and the manner in which they answered them. Tracking an enormous learner’s learning history and their correct and mistaken answers is essential to predict the learner’s future mistakes. Therefore, we propose a model which tracks the learner’s learning history efficiently. Our systems ranked fourth in the English and Spanish subtasks, and fifth in the French subtask. Introduction The second language acquisition modeling (SLAM) is an interesting research topic in the fields of psychology, linguistics, and pedagogy as well as engineering. Popular language learning applications such as Duolingo accumulate learning data of language learners on a large-scale; thus, there has been an increasing interest for SLAM using machine learning using such data. In this study on SLAM, we aim to clarify both: (1) the inherent nature of second language learning, and (2) effective machine learning/natural language processing (ML/NLP) engineering strategies to build personalized adaptive learning systems. In order to predict the learner's future mistakes, it is important to track a huge history of what and how exercises were solved by that learner and be able to model it. Therefore, we propose a model that can efficiently track a learner's learning history. (Piech et al., 2015;Khajah et al., 2014Khajah et al., , 2016 Figure 1: An exercise example. Given exercise is a "correct" input. Outputs are "1" each time a learner makes a mistake 2018 Duolingo Shared Task on SLAM We used data from Duolingo in this shared task. Duolingo is the most popular language-learning online application. Learners solve the exercises and this shared task use only 3 type of exercises. Exercise (a) is a reverse translate item, where learners translate written prompt from the language they know into the language they are learning. Exercise (b) is a reverse tap item, where learners construct an answer given a set of words and distractors in the second language. Exercise (c) is a listen item, where learners listen and transcribe an utterance in the second language. In this shared task, There are 3 exercise data of the following groups of second language learners: • English learners (who already speak Spanish) • Spanish learners (who already speak English) • French learners (who already speak English) The Duolingo data set, which contains more than 2 million annotated words, is created from the answers submitted by more than 6,000 learners during their first 30 days. In the related exercises, learners answer questions related to the second language they are learning; thus, they inevitably make various mistakes during the course. In this task, we predict mistakes on word level given an exercise. Figure 1 is an exercise example. Given a "correct" exercise as input a system has to predict labels as output. In general, most tokens are perfect matches; however, the remainder of the tokens are either missing or spelled incorrectly (ignoring capitalization, punctuation, and accents). The former is assigned the label "0" (OK), while the latter is assigned the label "1" (Mistake). 
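As a rough illustration of the labeling just described, the sketch below assigns label 0 to each token of the correct exercise that the learner produced (ignoring capitalization, punctuation, and accents) and label 1 otherwise; the normalization helper and the example sentences are our own assumptions, not the official task scripts.

```python
# Rough sketch of the word-level 0/1 labeling described above. The helper
# and examples are illustrative assumptions, not the shared-task tooling.
import string
import unicodedata

def normalize(token: str) -> str:
    # strip accents, punctuation and case
    token = unicodedata.normalize("NFD", token)
    token = "".join(c for c in token if unicodedata.category(c) != "Mn")
    return token.translate(str.maketrans("", "", string.punctuation)).lower()

def label_exercise(reference, learner_answer):
    answered = [normalize(t) for t in learner_answer]
    # 0 (OK) if the reference token appears in the learner's answer, else 1 (Mistake)
    return [0 if normalize(token) in answered else 1 for token in reference]

print(label_exercise(["I", "am", "a", "student"], ["I", "am", "studet"]))
# -> [0, 0, 1, 1]  ("a" is missing, "student" is misspelled)
```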
TMU System To track a lot of learner's histories, our proposed TMU system has two components: (1) a base component that predicts whether a learner has made a mistake for the given word in an exercise (Fig. 2, Prediction Bi-LSTM) and (2) a component that tracks a specific learner's information regarding the learned exercises and the words that he or she might have mistaken (Fig. 2, History LSTM). It is expected to track huge history of the learned exercise by inputting the hidden state of the Prediction model to the History LSTM. In prediction, we receive exercise as input and make predictions on word-level. Using Bi-LSTM for sequence labeling on exercise level, e.g., information as POS tags or dependency edge labels, allows us to share information within each exercise for better prediction. We perform training by feeding input exercises arranged in a chronologi-cal order for each learner. Table 1 lists all the features used by our system. We use features (1-7) included in the dataset distributed by the task organizers as well as the tracking history (8) (Section 3.3) and labels for language identification (9). We trained a single model with three languages, including English, Spanish, and French; in addition, we used the language identification feature to distinguish them. Features There are three types of inputs for the Bi-LSTM. The first input includes word-level features that indicate information changing for each word in an exercise. In particular, word surface and POS are used as word-level features. The second input consists of exercise-level features. In particular, days, session, format, time, and history are used as exercise-level features. The third input includes learner-level features. For this, learner and language features are extracted for each learner. Prediction Bidirectional LSTM We used bidirectional LSTM (Bi-LSTM) to predict whether a learner has mistaken each word in an exercise. The k-th word and POS of the j-th exercise of the i-th learner are converted into e i (j,k) Embeddings Description English, Spanish, French and p i (j,k) distributed representations, respectively. Further, the session and format of the j-th exercise of the i-th learner are converted into s i j and f i j distributed representations, respectively. Days and time are represented as b i j and t i j , respectively. User and language are converted into u i and l i distributed representations, respectively. History is the last hidden state c i (j−1,M ) of the History LSTM, which will be described later (Section 3.3). The inputs of the Bi-LSTM are given as is the concatenation of all features and N is the length of the j-th exercise. x i (j,k) is converted into the forward hidden state is fed into the extra hidden layer: whereĥ i (j,k) ∈ R dĥ×1 is an extra hidden layer output, W h ∈ R dĥ×d h is a weighting matrix, and b h ∈ R dĥ×1 is a bias. The extra hidden layer outputĥ i (j,k) is linearly transformed using the output layer as follows and the probability distribution p i (j,k) ∈ R t×1 of the true/false tag is acquired using the softmax function, where t is the size of the tag, which is set to 2 in our study. where Wĥ ∈ R t×dĥ is a weighting matrix and bĥ ∈ R t×1 is a bias. History LSTM As previously mentioned, to correctly predict each learner's mistakes, it is important to consider not only the history of learned exercises, but also the learner's answers to exercises. Thus, the History LSTM tracks all previous information regarding the learned exercises and how they were answered by each learner. 
For each j-th exercise, o i (j,1) , o i (j,2) , · · · , o i (j,N ) is given as an input to the j-th History LSTM, where o i (j,k) = [h i (j,k) ; g i (j,k) ]. h i (j,k) (Section 3.2) is considered as information about the j-th exercise of the i-th learner and g i (j,k) ∈ R 1×1 is the gold answer of the i-th learner to the j-th exercise. In addition, the first hidden state and cell memory of the j-th History LSTM is initialized with the last hidden state and cell memory of the previous j-1-th History LSTM. The hidden state c i (j,1) is created from o i (j,1) using the LSTM for the next step of the Prediction Bi-LSTM. Training The objective function is defined as follows: where D is the training data and θ represents model parameters. We use Backpropagation Through Time (BPTT) for training. In general, low-frequency words are replaced by unk word to learn unk vector. However, in our study, unknown words appear not because they have low-frequency, but because they have not been learned yet. Hence, we use words that appear for the first time in an exercise to be replaced by unk word to learn unk vector. In addition, we use words without unk replacement to track the history for the History LSTM. The final loss is calculated as follows: where αL unk θ is calculated by replacing the word appearing for the first time with unk, while (1 − α)L orig θ is calculated using this word itself. In particular, α expresses the degree of emphasis placed on unk and a learned word. For example, when a word "Japanese" appears for the first time, then: Original exercise: I am Japanese Replaced by unk: I am <unk> If the unk does not exist in any exercise, L θ has the same value as L orig θ . Testing During our test, predictions were made on exercises of the test data arranged in chronological order for each learner. We update History LSTM using output and hidden state of Prediction Bi-LSTM. Test data does not have gold answers unlike training data. Hence, each system used its own converted probability outputs of the Prediction Bi-LSTM component with arg max as gold answers. In addition, we performed ensemble predictions. The parameters of ensemble models are initialized with different values. As the final prediction result, we used the average of the probability outputs of each Prediction Bi-LSTM. Each system used its own converted probability outputs of the Prediction Bi-LSTM component as gold answers. Table 2 shows the number of exercises for train, dev and test data for each language. The hyper parameters of our model are listed in Table 3. All (4) 0.01 Dev, (Section 3.5) 3,000 Ensemble, (Section 3.5) 10 words that appeared in the training data were included in the vocabulary. Preliminary experiments showed that the AUROC of the one model trained on data of three languages was higher than those models trained for each language. Therefore, we trained a single model with three language tracks, including English, Spanish and French. Especially, AUROC increased for low-resource French language. Each model of the ensemble uses different dev and training sets randomly sampled from the data. In particular, since we needed to evaluate the learning results of Future Days of each learner, we combined the provided official training and dev sets and arranged exercises in chronological order of Days for each learner. Next, we randomly sampled exercises from final learning exercises of learners to create a dev set and the remaining data were used as training data. 
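To summarize the two-component architecture described above, the following is a compact, hedged PyTorch sketch: a Prediction Bi-LSTM labels each word of an exercise, and a History LSTM carries the learner's state (prediction hidden states plus gold answers) from one exercise to the next. The feature handling is heavily simplified to a single embedded input per word, and all dimensions and module names are our assumptions rather than the exact TMU implementation.

```python
# Compact sketch of the Prediction Bi-LSTM + History LSTM coupling.
import torch
import torch.nn as nn

class SLAMModel(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hid_dim=128, hist_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # the history vector of the previous exercise is appended to each word
        self.predictor = nn.LSTM(emb_dim + hist_dim, hid_dim,
                                 bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hid_dim, 2)            # labels: 0 (OK) / 1 (Mistake)
        self.history = nn.LSTM(2 * hid_dim + 1, hist_dim, batch_first=True)

    def forward(self, words, gold, hist_state):
        # words: (1, N) word ids of one exercise; gold: (1, N) 0/1 answers
        hist_vec = hist_state[0][-1]                    # last History LSTM hidden state
        x = self.embed(words)                           # (1, N, emb_dim)
        hist = hist_vec.unsqueeze(1).expand(-1, x.size(1), -1)
        h, _ = self.predictor(torch.cat([x, hist], dim=-1))
        logits = self.out(h)                            # (1, N, 2)
        # feed prediction hidden states + gold answers into the History LSTM
        o = torch.cat([h, gold.unsqueeze(-1).float()], dim=-1)
        _, hist_state = self.history(o, hist_state)
        return logits, hist_state

model = SLAMModel(vocab_size=1000)
state = (torch.zeros(1, 1, 64), torch.zeros(1, 1, 64))  # initial History LSTM state
words = torch.randint(0, 1000, (1, 5))
gold = torch.randint(0, 2, (1, 5))
logits, state = model(words, gold, state)
print(logits.shape)                                     # torch.Size([1, 5, 2])
```

The unk replacement scheme, the exercise- and learner-level features, and the ensembling described above are omitted here for brevity.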
Table 4 lists the results of SLAM for English learners, Spanish learners, and French learners. The systems are ranked by their AUROC. The TMU system ranked fourth in the English and Spanish subtasks, while it ranked fifth in the French subtask. Analysis of Tracking History In order to confirm the importance of history tracking, we compared the model that considers history (W/ History Model) with the model that does not (W/O History Model); the hyperparameters are those listed in Table 3. Table 5 lists our evaluation results. It can be observed that the prediction AUROC of the W/ History Model is considerably higher than that of the W/O History Model. As we expected, it is important to consider what learners have learned in the past and how they responded to it in order to improve future predictions. Conclusion In this study, we described the TMU system for the 2018 SLAM Shared Task. Our system is based on RNNs; it has two components: (1) a Bi-LSTM for predicting learners' errors and (2) an LSTM for tracking each learner's learning history. In this work, we have not used any language-specific information. As future work, we plan to exploit additional data for each language, such as pre-trained word representations, n-grams, and character-based features. Additionally, we hope to incorporate word difficulty features (Kajiwara and Komachi, 2018). In particular, the more complex a word is, the more difficult it is likely to be to learn.
2,760.8
2018-06-01T00:00:00.000
[ "Computer Science", "Linguistics" ]
Synchronous Mixing Architecture for Digital Bandwidth Interleaving Sampling System : By using a mixer to down-convert the high frequency components of a signal, digital bandwidth interleaving (DBI) technology can simultaneously increase the sampling rate and bandwidth of the sampling system, compared to the time-interleaved and hybrid filter bank. However, the software and hardware of the classical architecture are too complicated, which also leads to poor performance. In particular, the pilot tone used to synchronize the analog and digital local oscillators (LO) of mixers intermodulates with the high frequency components of the signal, resulting in larger spurs. This paper proposes a synchronous mixing architecture for the DBI system, where the LO of the analog mixer is synchronized with the sampling clock of the analog-to-digital converter. Its hardware and software are simplified—the pilot tone used to synchronize the LOs can also be removed. An evaluation platform with a sampling rate of 250 MSPS is implemented to illustrate the performance of the new architecture. The result shows that the spurious free dynamic range (SFDR) of the new architecture is more than 20 dB higher than the classical one in a high frequency range. The rise time of a step signal of the new architecture is 0.578 ± 0.070 ns faster than the classical one with the same bandwidth (90 MHz). Introduction The demand for sampling systems with a higher bandwidth and sample rate has dramatically increased for such fields as software-defined radio, coherent optical communication and time domain measurement [1][2][3]. However, due to the limitation of semiconductor technology, it is difficult to increase the speed of an analog-to-digital converter (ADC) above a gigahertz, which limits the sample rate of the sampling system [4,5]. The concept of time-interleaved ADCs was first proposed for increasing the speed of the sampling systems [6]. In the time-interleaved systems, ADCs are connected in parallel at the front end while sampling at different phases of the same clock. The digital multiplexer driven by ADCs sequentially selects the output of each channel to obtain the full speed code. The time-interleaved sampling system is extremely sensitive to the mismatches between the sub-ADCs. The signal-to-noise ratio (SNR) and spurious free dynamic range (SFDR) are not as good as a system that is built up with a single ADC [7,8]. The mismatches include offset, gain and sample time skew; some papers analyze the mismatch effects in a time-interleaved system and introduce some compensation methods, divided into foreground and background calibrations [9][10][11]. The normal operation of the ADC is interrupted during the foreground calibration, which is usually performed when the system is powered on. The background calibration does not affect the normal operation of the ADC. Another popular architecture for parallelizing ADCs is proposed in [12], which is called hybrid filter bank (HFB). It uses an analog analysis filter bank to replace the input power divider (or driver amplifier) in a time-interleaved system. It reconstructs the signal through a digital filter bank instead of a multiplexer. The analog analysis filter bank allocates different frequency bands to each sub-ADC, and attenuates the aliasing caused by the mismatch. Compared with the time-interleaved system, it greatly reduces the sensitivity of performance to mismatches between converters. 
The two methods introduced above both increase the system sampling rate, but cannot increase the bandwidth. Its upper limit is still determined by a single converter. The ADCs in the time-interleaved and hybrid filter bank architectures need to operate in the second Nyquist zone or even higher. Most gigahertz converters do not have this capability. Therefore, a series of frequency-interleaved architectures that use mixers to down-convert the input signal have been proposed [3,[13][14][15][16][17][18][19]. They can be divided into two categories: one is similar to time-interleaved ADCs, where the input signal is first distributed to each channel by a power divider and then down-converted by a mixer, which is generally a complex mixer [13,15,17,18]; the other is similar to the HFB system, where the analog analysis filter bank allocates the input signal to the sub-channel, followed by down-conversion. Due to the high operating frequency, the analog filter bank is passive, and the mixer is also a real mixer with a simple structure [3,14,16,19]. The digital bandwidth interleaving (DBI) system that this article focuses on belongs to the second architecture. Some high-speed digital oscilloscopes use DBI technology, which connects multiple acquisition channels in parallel while increasing both bandwidth and sampling rate [3,14,16]. Some problems still exist in the classical DBI system. First of all, the digital reconstruction process of the signal is too complicated, consumes too many computing resources, and cannot be completed in real time [3]. Secondly, it requires a pilot tone insertion system to establish the local oscillators (LO) synchronization between analog and digital mixers [20]. Pilot tone insertion requires analog circuits and digital LO synthesizers to work together, which increases hardware complexity and software calculations, and also leads to a decrease in the performance of the sampling system. Last but not least, the quality of the signal reconstructed by the DBI technology is not good, and there are many spurious components, which is worse than the time-interleaved system after calibration [21]. In this article, a new architecture called synchronous mixing for the DBI system is introduced. Compared with the classical architecture, it does not require digital mixing when reconstructing the signal, so there is no need to insert a pilot tone. In the synchronous mixing architecture, the signal reconstruction process is similar to the HFB system, and only a set of digital synthesis filters is needed, which can remove digital anti-aliasing filters in the classical DBI system. Although the software and hardware are greatly simplified, the reconstructed signal of the synchronous mixing architecture has a higher SFDR than the classical one because there is no intermodulation between the pilot tone and the input signal at the ADC drive amplifier. The rest of this paper is organized as follows. In Section 2, the sampling and reconstruction process of the classical DBI system is briefly described. Section 3 discusses the problems existing in the classical DBI system. In Section 4, the synchronous mixing architecture is proposed for the DBI system. The platform for evaluating the performance of the two architectures is built in Section 5. Section 6 gives some test results and the discussion. Finally, the conclusions are drawn in Section 7. Classical DBI Sampling System A classical two-channel DBI sampling system is shown in Figure 1. 
The input signal x(t) is bandlimited to π/T, and the output is a discrete-time sequencex [n]. H 1 (Ω) and H 2 (Ω) are the frequency responses of the analog analysis filter bank, which is usually realized by a diplexer. H 2a (Ω) and F 2a e jω are the frequency responses of the image rejection filter after the analog and digital mixers. F 1 e jω and F 2 e jω are the frequency responses of the interpolation filters. G 1 e jω and G 2 e jω are the frequencies of the reconstruction filters. In this section, it is assumed that the stopband attenuation of the image rejection and interpolation filter is large enough to completely eliminate the aliasing components. As shown in Figure 2, a real diplexer has a crossover region between the low-pass and high-pass channel. We define the stopband frequency point of the low-pass channel as Ω 1H , where Ω > Ω 1H , H 1 (Ω) = 0. Similarly, we define Ω 2L and Ω 2H for the high-pass channel. The crossover region is the frequency range between Ω 2L and Ω 1H . The signal in this range is sampled by both channels at the same time. In order to simplify the analysis of the spectrum shifting by the mixer, we split the spectrum of the real signal x(t) into positive and negative bands as follows: where X(Ω) is the Fourier transform of x(t). Similarly, we define H ± 2 (Ω) and H ± 2a (Ω). For digital filters, the range from 2kπ to (2k + 1)π are the positive band, and the rest are the negative band, where k is an integer. The analog angular frequency Ω and digital angular frequency ω satisfy ω = ΩT, where 1/T is the total sampling rate of the DBI system. cos ω c nx [n] Mag For the low frequency (LF) band, the output of diplexer x 1 (t) is sampled with a period of 2T, and then up-sampled by a factor of 2 to obtain y 1 [n]. The discrete-time Fourier transform (DTFT) of the sequence y 1 [n] is as follows: We define thatX(Ω) andH 1 (Ω) are the extension of X(Ω) and H 1 (Ω) with a period of 2π/T, where X(Ω) and H 1 (Ω) bandlimited to π/T [22]. Therefore, Equation (2) can be written as follows: In the classical DBI system, each sub-channel strictly satisfies the Nyquist sampling theorem, so Ω 1H is less thanπ/2T. (3) is the aliasing component caused by upsampling, which can be eliminated by the interpolation filter F 1 e jω . Then, we have the following: In Equation (4), F 1 e jω is reserved because the interpolation filter is not an ideal low-pass filter, and there is attenuation in the passband. For the high frequency (HF) band, in order for x 2 (t) to satisfy the sampling theorem, Ω 2H − Ω 2L < π/2T is required. The local oscillator (LO) frequency is chosen to create high-side injection for reducing the spurs generated by the mixer; therefore, Ω 2H < Ω c < Ω 2L + π/2T. The Fourier transform of x 2 (t) is as follows: The frequency of the digital mixer LO is ω c = Ω c T. In order to prevent the image generated by digital mixing overlapping the original signal, Ω c < Ω 2L /2 − Ω 2H /2 + π/T must be met. So, the DTFT ofx 2 [n] is as follows: where P 2 e jω is given by the following: The definitions ofH 2 (Ω) andH 2a (Ω) are the same as that ofH 1 (Ω). Finally, the output sequences of the high and low frequency bands are added together to producex[n]. 
Its DTFT can be written as follows: where For a continuous time signal x(t) bandlimited to π/T, when discretized by a single channel sampling system with a period of T, the DTFT of the output sequence is as follows [23]: Comparing Equations (8) and (11), it can be concluded that when the reconstruction filters G 1 e jω and G 2 e jω are designed so that S 1 e jω + S 2 e jω = 1, the DBI system is equivalent to a single channel sampling system. Considering causality in physical realization, the system should have a delay t d , so the perfect reconstruction equation becomes as follows: In DBI system, x 1 (t) is bandlimited to Ω 1H and x 2 (t) is bandlimited from Ω 2L to Ω 2H . Therefore, in the region [0, π), G 1 e jω and G 2 e jω need to meet the following: In the crossover region, where arg[·] means the unwrapped phase response. For more convenient implementation, G 1 e jω and G 2 e jω are divided into two stages in the actual DBI system [3], as shown in Figure 3. First, use G 1 e jω and G 2 e jω to correct the phase of the crossover region of the two channels to satisfy Equation (15). Then add the outputs together, and use G e jω to compensate the full frequency band amplitude and phase. Furthermore, the total delay of the low frequency channel is shorter, F 1 e jω and F 2a e jω have eliminated the alias. Therefore, in [19], let G 1 e jω be a cascade of a fixed delay and digital infinite impulse response (IIR) all-pass filter and G 2 e jω = 1 to simplify the implementation. In addition, the frequency-independent factors in Equations (9) and (10) are compensated by adjusting the ADC drive amplifier, which adds less of a noise floor than the digital compensation. Problems of Classical Architecture The classical DBI architecture can increase the sampling rate and bandwidth at the same time, but there are still some problems. This section describes them in detail. First of all, it requires a very large amount of calculation. As shown in Figure 1, there are five digital filters in a two-channel DBI system. Among them, filters F 1 e jω , F 2 e jω and F 2a e jω are used to eliminate the image generated after interpolation or mixing. In order not to add a non-linear phase, generally, a linear phase finite impulse response (FIR) filter is used. The order N of low-pass FIR filter can be estimated according to the Kaiser formula as follows [24]: δ 1 and δ 2 in Equation (16) are the ripples in the pass and stop band, respectively, and ∆ f is the normalized (by the sampling rate) width of the transition band. When we use a high speed 8-bit ADC, we can generally let δ 1 = 10 −3 , δ 2 = 10 −5 and ∆ f = 0.01, so the order of each FIR filter is N = 460. Ten years ago, the highest speed DBI system sampling rate was 80 GS/s [16]. At this rate, only the digital signal processing (DSP) performance required to implement the above three filters is 110 TMAC/s. The field programmable gate array (FPGA) with the highest DSP capability now has a performance of no more than 22 TMAC/s [25]. Therefore, the real-time processing of each sampling point is completely impossible in the classic DBI system. The DBI system usually uses a central processing unit (CPU) for the digital signal processing, and with an advanced trigger system, only a short sampling sequence after the trigger time is reconstructed and processed. As shown in Figure 4, the trigger time is generally random, and in the high frequency channel, it corresponds to the different phase of the analog LO. 
Therefore, it is difficult to determine the LO phase of digital mixing during reconstruction. Sampling Clock The classical DBI system uses a pilot tone insertion system to establish the synchronization between analog and digital LOs, as shown in Figure 5. This system includes analog and digital parts. In the analog domain, the analog LO and the sampling clock are multiplied by factors N and M from the same reference clock, and the value of the factor is determined according to different system requirements. The analog LO is divided into paths: one is to drive the mixer, and the other is divided by two. The divided LO passes through a band pass filter to eliminate harmonics and is inserted into the high frequency signal channel as a pilot tone. The high frequency input signal is down-converted by a mixer. The image is removed by a low-pass filter, and then passed through a band-stop filter with the same center frequency with pilot tone to eliminate the interference that may affect the subsequent pilot tone extraction in the digital domain. The down-converted signal and pilot tone are combined by the power combiner, then drive the ADC through the amplifier. In the digital domain, the sampling sequence is divided into two paths. One passes though the digital phase-locked loop (PLL) to obtain the phase information of the pilot signal, which is multiplied by 2 to obtain the phase of the analog LO, and then this phase is used to generate the digital LO. The other is interpolated and filtered by a low pass filter, and then a band stop filter is used to completely eliminate the pilot signal. Finally, the signal is up-converted by a digital mixer and anti-imaged to obtain the high frequency channel output. The digital PLL is realized by discrete Fourier transform (DFT), and its time complexity is O L 2 , where L is the sequence length. Increasing L can improve the frequency resolution, but it leads to a rapid increase in the computational complexity. In summary, the pilot tone insertion system is very complicated. At the same time, it also affects system performance and reduces the SFDR of the DBI system. In actual analog circuits, devices such as amplifiers, samplers and power combiners are not perfectly linear, and the value of 1 dB compression point is usually used to express its linear range [26]. The addition of the pilot tone reduces the range of the available linear interval of the high frequency channel. During the system's single tone signal test, the pilot tone and input signal are intermodulated in the power combiner, amplifier and the sampler of ADC to generate spurs and reduce the SFDR of the system. Synchronous Mixing Architecture In this section, the synchronous mixing architecture for DBI system is described in detail. Synchronous mixing means that the LO for mixer in high frequency channel is the same as the sub-channel sampling clock. In this architecture, the digital mixer for upconversion and the pilot tone insertion circuit for the synchronization of the analog and digital LO can be removed. First, we will explain why the digital mixer can be removed in the synchronous mixing architecture. Then, the reconstruction process of the signal is analyzed in detail. It is similar to a HFB system. The sampler can be represented as a multiplier and a continuous-to-discrete-time converter cascade, as shown in Figure 6. The input signal and a periodic impulse train are multiplied to obtain the following: Its Fourier transform is as follows: Equation (11) is the DTFT of x[n]. 
It can be concluded from Equations (11) and (18) that the sampling process uses a periodic impulse train to modulate the input signal and then normalizes the frequency axis. x s (t) An ideal periodic impulse train has an infinite number of harmonics, and modulation replicates the signal spectrum for an infinite number of times, as shown in Figure 7. We usually only pay attention to the baseband signal after sampling and extract it through a low-pass filter during recovery. Similarly, we can also design various types of band-pass filters to recover the signals mixed by various harmonics. In the digital domain, we can use interpolation and high-pass filters to reconstruct the signal in the corresponding frequency band. For example, if we want to determine the mixed signal of sampling clock 2π/T, we can interpolate the sequence x[n] by a factor of 2, and then pass it through a high-pass filter with a bandwidth of π/2. Figure 7. Signal spectrum after impulse modulation. The frequency bands in the dashed box are of interest after sampling in the synchronous mixing architecture. According to the above analysis, in the DBI system, we can use the sampler to upconvert, and then the digital mixer can be removed. In the high frequency band, we use synchronized clock signals to drive the analog mixer and ADC, which are the synchronized modulator and demodulator. Removing the digital mixer and anti-aliasing filter, the simplified DBI system is shown in Figure 8. [n] Now, we analyze how to design the correct reconstruction filters for the synchronous mixing DBI system. For the low frequency band, only the low-pass filter after interpolator is removed, so the DTFT of y 1 [n] still satisfies Equation (3). For the high frequency band, before y 2 [n], only the LO of analog mixer becomes cos(πt/T), so the DTFT of y 2 [n] is as follows: where Then, using the periodicity of the sampled signal, we have the following: Finally, the sequences of two channels are added to obtainx[n], and its DTFT should satisfy the following: Therefore, G 1 e jω and G 2 e jω should satisfy the following: T 0 e jω = P 1 e jω G 1 e jω + P 2 e jω G 2 e jω = e −jωt d T 1 e jω = P 1 e j(ω−π) G 1 e jω + P 2 e j(ω−π) G 2 e jω = 0 . where Equation (23) is similar to the perfect reconstruction (PR) equation in the HFB system [22], where T 0 e jω is called the distortion function, and T 1 e jω is called the aliasing function. The distortion function represents the amplitude and phase response of the entire system, and the aliasing function represents the aliasing caused by the input signal at the image frequency point. After we obtain the frequency response of the analog front-end circuit, we can refer to the design method of the synthesis filter bank in the HFB system to design G 1 e jω and G 2 e jω [27]. First, we solve Equation (23) at N equally spaced frequency points, then use inverse discrete Fourier transform (IDFT) to obtain the coefficients of the FIR filter. Finally, the coefficients are truncated by the appropriate window to obtain a filter of the specified length. t d is generally set to half of the length of the filter, and the value could be optimized by the Nelder-Mead simplex method in MATLAB [12]. The method of obtaining the response of the analog front-end circuit is given in Section 5. Evaluation Platform and Methods An evaluation platform is implemented in order to compare the performance of classical and synchronous mixing architectures, as shown in Figure 9. 
It can be divided into four parts: radio frequency (RF) front-end, data acquisition, digital signal processing and clock generation. The RF front-end includes the mixer, diplexer and other filters. Active mixer AD831 (manufactured by Analog Devices Inc., Norwood, MA, USA) is used for down-conversion of the high frequency signal. In order to reduce the harmonics generated by the mixer, there is a lumped resistance attenuator with 20 dB attenuation before it. As shown in Figure 10, the input diplexer is implemented by connecting two singly terminated 11th-order Chebyshev low-pass and high-pass filters in parallel. The singly terminated filters are designed in the ADS (Advanced Design System) software. The diplexers are implemented using LQW series inductors and GRM series capacitors from Murata Company and assembled on double-layer printed circuit boards. Other analog filters are also implemented in this way. This type of diplexer is called a contiguous diplexer because the low-pass and high-pass filters have a common 3 dB attenuation frequency [28]. In physical realization, due to the insertion loss of the lumped capacitor and inductor, the attenuation of the common frequency is generally greater than 3 dB. Additionally, due to the finite quality (Q) factor of the component, the stopband attenuation is also limited. This article defines a gain less than −50 dB as the stopband. Some key specifications of the diplexers for classical and synchronous mixing architectures are shown in Table 1. Low-pass Output High-pass Output Figure 10. Structure of the diplexer. Each 14-bit resolution ADC (ADC14X250) is driven in a cascade by a single-ended to differential amplifier (LMH5401) and a variable gain amplifier (LMH6401). Variable gain amplifiers are used to balance the gain difference between low-frequency and highfrequency channels. The total sampling rate of the system is 250 MSPS, so each ADC works at 125 MSPS. The sampling clock of ADCs is generated by PLL1 (LMK04828), which is driven by a programmable oscillator (LMK61E2), and the reference clock is 12.5 MHz. PLL1 also generates a logic clock for FPGA (XC7K325T-FFG900, manufactured by Xilinx Inc., San Jose, CA, USA) and drives PLL2 (LMX2572) and PLL3 (LMX2572) to generate the pilot tone (only for the classical architecture) and analog LO. All clocks are controlled and synchronized by PLL1. The ADCs, amplifiers, PLLs and reference clock are all manufactured by Texas Instruments Inc (Dallas, TX, USA). LMK04828 is a phase-locked loop chip that complies with the JESD204B standard. It phase locks to the reference clock and generates the device clock and SYSREF signal (a signal used for synchronization of multiple converters defined in the JESD204B standard). In addition, the reference clock buffer in LMK04828 outputs the reference clock to a multiple output buffer LMK00304, which drives two secondary phase-locked loops LMX2572 to generate the analog LO and pilot tone, respectively. The SYNC and SYSREF signal are generated by the same frequency divider circuit in the LMK04828. It can not only control the synchronization of two LMX2572 chips, but also synchronize the ADC sampling clock and the analog LO. In the classical architecture, the analog LO is set to 95 MHz. In the synchronous architecture, the analog LO is the same as the single ADC sampling clock at 125 MHz. Data are transferred between ADC and FPGA through the JESD204B interface, and then sent to the DDR memory (MT41K256M16TW, manufactured by Micron Inc., Boise, Idaho, USA) for buffering. 
The JESD204B core and the memory controller in the FPGA are connected by the AXI4-Stream bus, and data transmission is carried out through direct memory access. The computer obtains the data through communication between the Vivado software on the computer and the Integrated Logic Analyzer (ILA) on the FPGA. The FPGA is only responsible for the synchronous reception and buffering of the ADC sampled data. Digital filtering, digital mixing, pilot tone phase extraction and data reconstruction are all done on the computer using MATLAB software. For the classical architecture, $F_1(e^{j\omega})$, $F_2(e^{j\omega})$ and $F_{2a}(e^{j\omega})$ are designed using the Filter Designer toolbox; their key specifications are shown in Table 2. The reconstruction filter design method of the classical DBI system is introduced in detail in [29]. For the synchronous mixing architecture, we need to know the analog front-end circuit frequency responses $P_1(e^{j\omega})$ and $P_2(e^{j\omega})$ in order to design the reconstruction filters. We can use a series of tones of different frequencies to test the system, use the DFT to analyze the sequences $y_1[n]$ and $y_2[n]$, and obtain the amplitude and phase of the sampling sequence at the corresponding frequency point [30]. According to Equations (3) and (21) (without considering alias components), we compare the sampling sequence with the original signal and calculate $P_1(e^{j\omega})$ and $P_2(e^{j\omega})$. When using a single tone test to obtain the system response, the sampling system under test should have a precise trigger circuit to determine the initial moment of the sequence so that the absolute phase response of the system can be obtained [31]. However, the system in this article does not include a trigger circuit. We therefore design the reconstruction filter bank by obtaining the amplitude responses and the relative phase response between the two channels through single tone tests, and then use a pulse signal test to adjust the overall phase response. We denote the amplitude responses of the two channels as $\bar{P}_1(e^{j\omega})$ and $\bar{P}_2(e^{j\omega})$, and the relative phase response between them as $\varphi(\omega)$, where $\angle P_1(e^{j\omega})$ and $\angle P_2(e^{j\omega})$ are the absolute phase responses of $P_1(e^{j\omega})$ and $P_2(e^{j\omega})$. The frequency-independent gain factors are compensated by the input amplifiers of the ADCs. The total delay of the system is $d_1$. We use $\bar{P}_1(e^{j\omega})$, $\bar{P}_2(e^{j\omega})$ and $\varphi(\omega)$ to solve Equation (23) and obtain the reconstruction filter responses as follows:

$$\hat{G}_1(e^{j\omega}) = e^{-j\omega d_1}\,\bar{P}_2(e^{j(\omega-\pi)})\,e^{j\varphi(\omega-\pi)}/D_1(e^{j\omega}),\qquad \hat{G}_2(e^{j\omega}) = -\,e^{-j\omega d_1}\,\bar{P}_1(e^{j(\omega-\pi)})/D_1(e^{j\omega}). \qquad (26)$$

From Equations (28) and (30), it can be concluded that signal reconstruction using filters designed from the amplitude responses and the relative phase response is also alias free, the amplitude is also not distorted, and the result is only a phase $\angle P_1(e^{j\omega})$ away from perfect reconstruction. We can calibrate this phase difference by a pulse test, because the entire system is low-pass and the phase response is linear at very low frequencies. After the pulse signal is input to the system, the output sequences of the reconstruction filters $\hat{G}_1(e^{j\omega})$ and $\hat{G}_2(e^{j\omega})$ are added together and the DFT of the sum is computed. We adjust the initial time of the sequence so that the phase of the fundamental is the same as that of the input pulse, and then compare the phases of the other harmonics to obtain the phase response to be compensated. A system designed directly from the perfect reconstruction equation has a brick-wall response, and its Gibbs effect is very obvious. The response types of actual sampling systems include Gaussian, maximum flat and Bessel responses [32].
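As a concrete illustration of the single tone characterization described above, the sketch below estimates the complex gain of each channel at a test frequency from one DFT bin and then forms the amplitude responses and the relative phase between the channels. It assumes the two channels are captured simultaneously (so the relative phase is meaningful even without a trigger) and neglects alias components; the function names and the sign convention of the relative phase are our own, not taken from the article.

```python
import numpy as np

def tone_response(y, f0, fs):
    """Complex channel response at tone frequency f0: the DFT bin of the
    captured sequence y divided by the same bin of an ideal unit-amplitude
    reference tone of equal length (leakage effects ignored)."""
    n = np.arange(len(y))
    e = np.exp(-2j * np.pi * f0 / fs * n)
    return np.sum(y * e) / np.sum(np.cos(2 * np.pi * f0 / fs * n) * e)

def front_end_response(captures_lo, captures_hi, freqs, fs):
    """Amplitude responses of both channels and their relative phase at the
    test frequencies; captures_lo/captures_hi are simultaneous records of
    the low and high frequency channels for each test tone."""
    P1 = np.array([tone_response(y, f, fs) for y, f in zip(captures_lo, freqs)])
    P2 = np.array([tone_response(y, f, fs) for y, f in zip(captures_hi, freqs)])
    return np.abs(P1), np.abs(P2), np.angle(P2 / P1)
```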
The maximum flat response has the smallest oversampling rate and is commonly used in high speed acquisition systems. Therefore, in the synchronous mixing architecture, a digital filter is added after the phase compensation filter to adjust the system to the maximum flat response. In summary, the reconstruction process of the synchronous mixing system is shown in Figure 11. For the synchronous mixing architecture, a series of single tone tests with a total of 200 frequency points is carried out. Then, 2048 points are obtained through spline interpolation and used to solve Equation (26) for the responses of the reconstruction filters. Finally, we use the inverse fast Fourier transform (IFFT) and rectangular window truncation to obtain FIR filters $\hat{G}_1$ and $\hat{G}_2$ of order 736. Increasing the order of the filters does not improve the reconstruction much. In the pulse test, the response value to be compensated can be added to $\hat{G}_1(e^{j\omega})$ and $\hat{G}_2(e^{j\omega})$; then, we use the IFFT to obtain the new $\hat{G}_1$ and $\hat{G}_2$. The advantage of this is that there is no need to increase the order of the filters for phase compensation. The response compensation factor can be multiplied by $e^{-j\omega d_1}$ in Equation (26), which means that a brick-wall response system is not needed. The above method is also used to obtain G in the classical architecture, as shown in Figure 3; the order of this filter is also 736. Figure 12 shows the prototype system under test. The arbitrary waveform generator DG4162 from RIGOL Technologies is used to generate the single tone and pulse signals. Two power supplies, GPE-2323C and GPD-3303S, from Good Will Instrument power the mixer and the other modules. Test Results and Discussion Using the single tone test method in Section 5, we obtain the frequency response of the analog front-end circuit of the synchronous mixing DBI system, as shown in Figure 13a. Using $\bar{P}_1(e^{j\omega})$, $\bar{P}_2(e^{j\omega})$ and the relative phase $\varphi(\omega)$, we solve the perfect reconstruction equation, and the resulting reconstruction filter bank frequency response is shown in Figure 13b. Figures 14 and 15 show the spectra of the single tone tests. We choose two cases: one where the input frequency is in the stopband of the low-pass filter of the diplexer, and one where it is near the common frequency of the diplexer. In the first case, almost all the power of the input signal enters the high frequency band of the DBI system, and the test result is shown in Figure 14. For the classical architecture, the frequency of the input signal is 83 MHz, and the power of the pilot tone is −32.5 dBm. For the synchronous mixing architecture, the frequency of the input signal is also 83 MHz. Figure 14a shows the various spurs of the classical architecture, where f_in represents the frequency of the input signal, f_LO the frequency of the analog LO, f_s the total sampling rate of the system, and f_pilot the frequency of the pilot tone. The largest spur in Figure 14a is caused by the intermodulation of the pilot tone and the down-conversion output of the analog mixer; the SFDR of the system is only 52.9 dBc. Some other spurs are due to the finite attenuation of the analog and digital anti-aliasing filters. For the spurs generated by other nonlinearities or by digital interpolation and mixing, the maximum value is about −70 dBFS. As shown in Figure 14b, the synchronous mixing architecture does not need the pilot tone, and its SFDR, 72.2 dBc, is much higher than that of the classical architecture.
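SFDR figures such as the 52.9 dBc and 72.2 dBc quoted above are read from the output spectrum as the ratio of the carrier to the largest spur. A minimal sketch of that measurement is shown below; it assumes the fundamental is the strongest bin and uses a simple Hann window with a small guard band around the carrier. This is a generic estimate, not the authors' measurement script.

```python
import numpy as np

def sfdr_dbc(x, guard_bins=3):
    """Spurious-free dynamic range in dBc of a real sequence x: ratio of
    the fundamental (strongest bin) to the largest remaining spur,
    excluding DC and a few guard bins around the fundamental."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    spec[0:2] = 0.0                          # ignore DC
    k0 = int(np.argmax(spec))                # fundamental bin
    fund = spec[k0]
    lo, hi = max(k0 - guard_bins, 0), k0 + guard_bins + 1
    spec[lo:hi] = 0.0                        # mask fundamental +/- guard bins
    return 20 * np.log10(fund / spec.max())
```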
At the same time, it does not use digital anti-aliasing filters, but synthesizes the signals from the two channels by perfect reconstruction, so some of the other spurs are also smaller than those of the classical architecture. In the second case, the power of a single tone near the common frequency is approximately equally divided by the diplexer and then input to the two channels. The test result is similar to that of the high frequency band, as shown in Figure 15. In Figure 15a, the largest spurious component is still produced by the intermodulation of the pilot tone and the down-conversion signal. In Figure 15b, a 63 MHz signal is input into the synchronous mixing architecture. The spurs at frequency (f_s/2 − f_in) are caused by the coefficient truncation of the reconstruction filters. Figure 16 shows the SFDR and SNR of the two architectures for single tone input signals from 5 MHz to 90 MHz. When the frequency of the input signal is low, the power of the signal mainly enters the low frequency channel of the DBI system. The low frequency channels of the two architectures are the same, so the SFDR is also similar. As the frequency of the input signal increases, most of the power of the signal enters the high frequency channel. In the classical architecture it intermodulates with the pilot tone, which greatly reduces the system SFDR. This problem does not exist in the synchronous mixing architecture, and when the frequency of the input signal is between 60 MHz and 75 MHz, the response adjustment filter attenuates the spurs. Therefore, the SFDR of the synchronous mixing architecture is improved by more than 20 dB compared with the classical one in the high frequency band. At the same time, due to the reduction of spurs, the SNR of the synchronous mixing architecture is also improved by about 2-3 dB at high frequencies. Figure 17 shows the amplitude response and the step response of the classical and synchronous mixing architectures. In Figure 17a, the −3 dB bandwidth of the two architectures is 90 MHz. The synchronous mixing architecture has a maximum flat response and is band-limited to half of the total sampling rate, while the classical architecture is band-limited to the frequency of the LO. Due to the narrow transition band of the digital anti-aliasing filter, the amplitude response drops faster in the classical architecture. Therefore, more high frequency components of the step signal enter the synchronous mixing architecture. The step responses of the two architectures are shown in Figure 17b, and the rise times are shown in Table 3. A step signal with a rise time of about 5 ns is input into the two architectures, and 200 rising edges are averaged. The rise time of the synchronous mixing architecture is 0.578 ± 0.070 ns faster than that of the classical one. Table 4 shows the performance and cost comparison between the two architectures. During digital signal processing, the multiplication operations consume the most computing resources. For the signal reconstruction of the above two architectures, this is mainly reflected in the order of the FIR filters and in the use of the DFT to solve the pilot tone phase of each sampling sequence for digital mixing. For the classical architecture, the total order of the FIR filters is 2124 (476 + 477 + 435 + 736), while this value is 1472 (736 + 736) in the synchronous mixing architecture, a reduction of about 1/3. The pilot tone in the classical architecture is 47.5 MHz, and a 10,000-point DFT is used to analyze its phase, which achieves a resolution of 0.025 MHz at a 250 MHz sampling rate.
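The rise time comparison above is a standard 10%–90% measurement on averaged step responses. A minimal sketch is given below, assuming uniformly sampled and already-aligned edges; the 10-sample level estimates and the variable names are illustrative, not the authors' procedure.

```python
import numpy as np

def rise_time_10_90(step, fs):
    """10%-90% rise time (seconds) of a single step response sampled at fs,
    with the low/high levels estimated from the first and last few samples."""
    lo = np.mean(step[:10])
    hi = np.mean(step[-10:])
    t10 = np.argmax(step > lo + 0.1 * (hi - lo)) / fs
    t90 = np.argmax(step > lo + 0.9 * (hi - lo)) / fs
    return t90 - t10

# Averaging many captured edges, as described in the text (edge_list is a
# hypothetical list of aligned, equal-length edge captures):
# edges = np.mean(np.stack(edge_list), axis=0); rise_time_10_90(edges, 250e6)
```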
At the same time, an additional PLL, an analog filter and a power combiner are used to generate the pilot tone. These components are not needed in the synchronous mixing architecture. Compared with the classical architecture, the only disadvantage of synchronous mixing is that it requires the bandwidth of the ADC to reach half of its sampling rate. This may not be possible for some ultra-high-speed ADCs built in complementary metal oxide semiconductor (CMOS) technology [33,34]. Under the classical architecture, the bandwidth of the ADC only needs to reach half of the analog LO frequency. In this case, we can reduce the sampling clock of the ADC to achieve synchronous mixing. As shown in Figure 17a, a signal with a frequency higher than the LO cannot be sampled by the classical DBI system. However, a smaller oversampling rate will increase the overhead of the digital signal processing. Conclusions A new synchronous mixing architecture for the DBI system is proposed in this paper. Compared with the classical architecture, it does not require a pilot tone insertion system, digital interpolation filter, digital mixer, or digital anti-image filter, greatly simplifying the hardware and software structure of the DBI system. In addition, we have built an evaluation platform with a 250 MSPS sampling rate and 14-bit resolution to test the performance of the two architectures. The test results show that the SFDR of the new architecture improves by more than 20 dB in the high frequency band compared with the classical architecture. The rise time of the step response is also shortened by 0.578 ± 0.070 ns with the same −3 dB bandwidth. In addition, the total order of the FIR filters is reduced by 1/3, and the 10,000-point DFT calculation for the pilot tone phase analysis is removed from the reconstruction of each sampling sequence. In general, the new architecture has advantages over the classical one in terms of implementation cost and performance. Data Availability Statement: The data presented in this study are available on request from the corresponding author. Conflicts of Interest: The authors declare no conflict of interest.
8,829.8
2021-08-18T00:00:00.000
[ "Computer Science" ]
Benefits of Pod Dimensioning With Best-Effort Resources in Bare Metal Cloud Native Deployments Container orchestration platforms automatically adjust resources to evolving traffic conditions. However, these scaling mechanisms are reactive and may lead to service degradation. Traditionally, resource dimensioning has been performed considering guaranteed (or request) resources. Recently, container orchestration platforms have included the possibility of allocating idle (or limit) resources for a short time in a best-effort fashion. This letter analyzes the potential of using limit resources as a way to mitigate degradation while reducing the number of allocated request resources. Results show that a 25% CPU reduction can be achieved by relying on limit resources. I. INTRODUCTION THE PROVISIONING of services in today's communication paradigm generally involves two main entities, the cloud provider and the service provider. Cloud providers are responsible for the cloud infrastructure maintenance and updates, including managing physical (e.g., CPUs) and virtual resources (e.g., Virtual Machines (VMs), containers). These resources are usually rented out to service providers upon request. The cloud provider charges the service provider based on the amount and time resources are reserved/used. Service providers then use the rented VMs/containers to deploy end-user applications. Cloud native technologies (e.g., containers) make it possible to easily develop, deploy, and manage services in the cloud [1]. Many networking applications such as monitoring, automation, Radio Access Network (RAN) and core virtualization have been shown to benefit from using cloud native technologies and containers [2], [3], [4], [5]. Cloud native services are handled by cloud orchestrators like Kubernetes (K8s), a widely used open-source container orchestration platform [6]. In K8s, each service runs on a set of Pods. Each Pod is a collection of one or more containers, with a given amount of resources (e.g., memory, CPU, storage). The number of running Pods can be scaled (i.e., increased or decreased) over time to allow a service provider to match the time-varying end-user demands. Pods can be deployed in VMs (Fig. 1(a)) in an infrastructure-as-a-service fashion, where each service provider needs to rent as many VMs as needed to compose its services. This approach ensures hard isolation of resources among different service providers, who need to pay for all the resources associated with the VMs, regardless of the number of running Pods. To fully take advantage of cloud native technologies, Pods of different services can be deployed directly over a common bare metal infrastructure without the need for a virtualization layer (Fig. 1(b)). This approach reduces the performance penalties introduced by hypervisors (e.g., for disk and network input/output operations) [7] while simplifying service deployment and operations [3]. Additionally, service providers can rent resources in a Pod-as-a-service fashion, paying only for what is needed to deploy and operate their Pods. Finally, soft resource isolation is also offered, allowing the use of both guaranteed (referred to as request) and shared (referred to as limit) resources. By doing so, idle (i.e., not used) resources initially set aside for one service can be used by Pods of another service when needed, in a best effort way. Service providers decide the amount of resources assigned to each Pod they rent.
If, at any point in time, the resources/Pods are under-dimensioned and cannot satisfy the end-user demands, the latter may experience degradation, e.g., an increased application response time, leading to a potential loss of revenue for the service provider [8]. On the other hand, if resources/Pods are heavily over-dimensioned, the service provider will pay for resources that are unused most of the time. Leveraging soft isolation might provide a third and interesting opportunity. A service provider can avoid overprovisioning by counting on the use of limit resources whenever needed. Since these resources are paid for only when used, there is an evident advantage in terms of cost savings. On the other hand, since limit resources are not guaranteed, a service provider might face the possibility of higher degradation fees. For this reason, it becomes crucial to investigate the potential costs vs. benefits of soft isolation. Different techniques for Pod dimensioning have been proposed in the past, considering mainly request resources and different scaling thresholds in VM-based deployments or bare metal deployments with only hard resource isolation [9], [10], [11]. The scaling can be based on machine learning and prediction techniques [12], [13], and can also include application-related metrics (e.g., response time) [14], [15], [16] to improve the service performance. All these works focus on hard isolation and do not investigate the possibility of using soft resource isolation and limit resources to mitigate service degradation and reduce the overall costs. In this letter, we focus on the Pod dimensioning problem in bare-metal deployments, where the use of limit resources among Pods of different services is allowed. Leveraging this, we analyze, by means of simulations, the potential and limitations of using limit resources to reduce degradation without the need for over-dimensioning. A cost analysis is performed by comparing this approach against a traditional scaling strategy relying only on request resources and shows when it is beneficial for a service provider to leverage limit resources. II. SCENARIO DESCRIPTION AND USE CASE EXAMPLE In the following, we focus our attention on bare metal deployments of Pods handled by K8s (Fig. 1(b)). In K8s, resources such as CPU and memory are assigned by means of resource requests and limits [17]. Request is the amount of guaranteed resources that each Pod can access at any time during its operation (hereinafter referred to as request resources). K8s assigns Pods to nodes based on the amount of request resources and the availability of resources on the node. A service provider pays for this type of resource even if they are not fully used by the running Pods (i.e., they are idle). In contrast, limit is the amount of resources that are accessed on a first-come-first-served basis, in a best-effort fashion, only when two conditions are met: (i) a Pod needs more than the request resources, and (ii) there are unused resources at the node. We refer to this amount as limit resources. Limit resources usually take advantage of unassigned resources at a node, or of unused request resources (idle) left free by other Pods. The amount of limit resources that Pods can access depends on the resource contention level at the node, which varies over time and depends on the amount of available resources, deployed Pods, and service requests.
The service provider pays for limit resources only when they are accessed, and only for the time and quantity that has been used. Service providers must solve the Pod dimensioning problem, i.e., define the Pod's size (in terms of request and limit resources), the Pod's scaling parameters (e.g., desired average CPU usage), and the minimum and maximum number of replicas. Pods can be replicated during service operation to allow a service provider to match the time-varying needs of its end-users. Scale-out and scale-in operations rely on the monitoring capabilities of K8s and its built-in Horizontal Pod Autoscaler (HPA). The HPA uses periodic measurements of a specific metric (e.g., CPU and/or RAM usage) from each of the Pods [18] and computes the number of Pod replicas needed as follows:

$$dR = \lceil cR \cdot cMV / dMV \rceil, \qquad (1)$$

where dR is the (new) desired number of replicas, cR is the current number of replicas, cMV is the current metric value, and dMV is the desired metric value (as specified by the service provider). In this letter, we consider CPU as the resource and CPU usage as the metric to drive the scaling operations. In this case, cMV is the average CPU usage over all current Pods and dMV is the scaling threshold. In K8s, the threshold is indicated as a percentage of the request, and can be easily converted into the corresponding CPU amount dMV. To avoid frequent scaling operations, K8s establishes a default tolerance value by which the system does not scale if 0.9 < cMV/dMV < 1.1. When scaling is triggered, some time is needed to adjust to the new desired state (i.e., to reach cR = dR). This time, referred to as the scaling delay, is required to create/terminate Pods, update load-balancing components, and set up the service(s) within the Pod. Ideally, a service provider would like to dimension and scale the number of running Pods in a way that rents just enough resources to match the CPU demand (i.e., the CPU required by the service provider to provide the services to the end users) over time, while avoiding too many idle resources (i.e., CPUs paid for but unused) and degradation (i.e., the number of CPUs that could not be allocated to Pods, e.g., due to lack of resources). However, the scaling delay makes it difficult to always match the CPU demand, thus generating degradation for the users. As an example, let us consider the CPU demand over time shown in Fig. 2(a). When the service is running, the number of Pods is adjusted by the HPA, and the CPU allocation can be categorized as degradation, used, or idle. Fig. 2(b) shows the CPU allocation for a simple dimensioning case with a request of 1 CPU per Pod and without the possibility of using limit CPUs. The effect of the scaling delay (assumed to be 4 Time Units (TUs) in this case) can be observed when the CPU demand increases (Fig. 2(c)). At time step 93, 4.8 CPUs are used in total while 0.2 CPUs are idle. The threshold, set to 4.25 CPUs, is exceeded, and cMV/dMV > 1.1. As a consequence, the desired number of replicas is updated using (1), triggering a scale-out of 1 extra replica. Due to the scaling delay, this new replica only becomes available at time step 98. The demand continues to increase and, in the period between time steps 94 and 98, it exceeds the request CPUs, resulting in degradation. Different countermeasures can be taken to mitigate degradation, depending on the level of degradation that each service can accept.
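A minimal sketch of the HPA rule in (1), together with the default tolerance band, is given below (illustrative only, not the letter's simulator); the numbers in the example reproduce the scale-out decision at time step 93 described above.

```python
import math

def desired_replicas(current_replicas, current_metric, desired_metric, tol=0.1):
    """K8s-style HPA rule: dR = ceil(cR * cMV / dMV), with no scaling while
    cMV/dMV stays within +/-10% of 1 (the default tolerance band)."""
    ratio = current_metric / desired_metric
    if abs(ratio - 1.0) <= tol:
        return current_replicas
    return math.ceil(current_replicas * ratio)

# Example from the text: 5 replicas, average usage 0.96 CPU per Pod,
# threshold 0.85 CPU per Pod -> ratio ~1.13 > 1.1 -> scale out to 6.
print(desired_replicas(5, 0.96, 0.85))  # 6
```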
During the dimensioning phase, the amount of request resources assigned to each Pod can be set to a higher value, allowing each Pod to access more resources. Another option is to select a low(er) scaling threshold, anticipating the scaling out process. However, both options lead to Pod overdimensioning, potentially resulting in a higher amount of idle resources that must be anyway paid for. Another possibility is to allow the use of limit CPU resources. Considering the example in Fig. 2, the area representing the degradation could be replaced, in part or in full, by limit CPUs. A service provider could bet on their availability during operation to help reduce degradation, thus potentially lowering the amount of needed request CPUs. III. NUMERICAL RESULTS In order to evaluate the benefits of using limit resources while solving the Pod dimensioning problem, we developed a custom framework written in Python. The framework reproduces the HPA behavior explained in the previous section, with the following general assumptions. We assume a discrete amount of time instants in which we evaluate the service provider CPU demand, to be divided among the active Pods. We discretize the time in TUs that represents the monitoring cycles performed by K8s to obtain the metrics from the Pods and take actions (e.g., scaling). The CPU demand is considered to be an average over a time interval and evaluated at each cycle. This simplification is needed to avoid heavy simulation of run-time CPU resources scheduling, and it allows to measure average CPU degradation and idle resources in a similar fashion as K8s monitoring cycles. A. Simulation Settings We consider a service provider with a CPU demand over time according to the workload pattern shown in Fig. 2(a). This demand was gathered from Swedish University Network (SUNET) [19], converted into CPU load and augmented to mimic a real-world application. The traffic profile divides the 24 hours of a day into 1530 TUs. The obtained results are an average of 10 days. At each simulation, we add to each sample in Fig. 2(a) a random uniform value in the interval of ± 20%. We assume a service composed of a single Pod type, with two replicas deployed as the minimum number at time 0. At each time step, the CPU demand is split equally among the running replicas, simulating a perfect load balancing scheme. The number of replicas is also calculated at each time step according to (1), and the scaling delay is set to 4 TUs. Moreover, resources are measured in CPU × TUs. We use as a baseline a hard isolation deployment scheme over bare metal with the following configuration. The request CPU is set to 1 and no limit resources can be used, while the scaling threshold is set to 85% of the request CPU. We then analyze separately the two methods to mitigate degradation based on a higher threshold and a larger CPU request. The considered scaling thresholds are 85%, 75%, and 60%, while the Pod request values (hereinafter referred to as sizes) are 1, 2, 5 CPUs. To assess the potential benefits of limit resources, we simulate the case in which Pods can use as many limit resources as necessary, as long as they are free. A parameter α is used to represent the amount of available limit resources as a portion of the request resources allocated for each Pod. For example, if α = 10% and the CPU request is 1 CPU, each Pod can access up to 110% of the CPU request resources, i.e., 1.1 CPU. Fig. 3(a) shows degradation experienced by a service as a function of different scaling thresholds. 
We assumed that the Pod size is 1 CPU. We observe that lowering the threshold results in a lower degradation (i.e., from 726 to 30 [CPUxTU] when the threshold goes from 85% to 60%). This can be expected. With a more conservative value to trigger the scaling, Pods can access more CPU resources during the scale out process. However, there is a price to pay in terms of CPUs that stay idle (Fig. 3(b)). 12031 extra [CPUxTU] are required to reduce the degradation from 726 to 30 [CPUxTU], which is equivalent to 40% extra request resources required to reduce degradation by 96% with respect to the benchmark case. However, 94% of these extra resources are idle, as they are requested by the extra Pods, but not used most of the time. Fig. 4(a) reports the degradation for different (request) Pod sizes, with a fixed scaling threshold of 85%. Allocating Pods with more resources results in lower degradation. Fig. 4(b) reports the corresponding total resources. When the Pod size increases from 1 to 5 CPUs, the amount of total resources increases by 2822 [CPUxTU]. This corresponds to 9% extra request resources to reduce degradation by 52% with respect to the benchmark case. B. Resource Usage and Degradation Analysis In the following, we investigate whether limit resources can be used to mitigate degradation without impacting significantly idle resources. Fig. 5(a) depicts the degradation for different amount of limit resources that each Pod can access (α). The Pod request is 1 CPU and the scaling threshold 85%. When α=10%, the degradation decreases by 60% down to 294 [CPUxTU] with respect to the case with α=0%. When α=50%, the degradation is 17 [CPUxTU], a value similar to the case with a 60% threshold in Fig. 3(a), and much lower than the values in Fig. 4(a). Fig. 5(b) reports the total resources assigned to the service for different values of α. The total amount slightly increases with α, due to the larger amount of resources that can be accessed. When α=50% the total resources are 31858 [CPUxTU], adding only 1464 extra [CPUxTU] out of which 865 are request (used+idle) and 599 are limit. In this case, only 5% extra total resources are needed to reduce degradation by 98% with respect to the benchmark case. Compared to the value obtained with a threshold of 60% (Fig. 3(b)), 10566 [CPUxTU] resources can be saved, i.e., an improvement of 25%. These results show that using limit resources is potentially more resource efficient than relying on a high number of request resources. A service provider, instead of using a 60% threshold, could bet on having access to the required limit resources (α), limiting the extra resources assigned to Pods. However, relying only on limit resources does not give any guarantees, since the number of limit resources depends on the actual level of resource contention at the nodes where the Pods are running. Therefore, this approach is viable for service providers that want to improve service performance at a low cost, but can also accept some degradation. Conversely, service providers with strict constraints on degradation should rather rely on the resource over-provisioning strategies, such as using lower thresholds or larger Pod sizes. The actual level of resource contention is not under the control of the service provider as it depends on the effects of the runtime dynamics (e.g., how Pods are deployed, spikes in the CPU demand). The analysis of these aspects requires dedicated studies and is outside the scope of this letter. 
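To make the bookkeeping behind the degradation, idle and limit figures reported above concrete, the following per-time-unit accounting sketch follows the letter's assumptions (perfect load balancing, a request of 1 CPU per Pod, limit capped at α·request) and additionally assumes the best case in which the limit CPUs are always free at the node; the variable and function names are ours, not the authors' Python framework.

```python
def allocate(demand, replicas, request=1.0, alpha=0.5):
    """Split a CPU demand over `replicas` Pods and classify the allocation for
    one time unit: (used request, idle request, used limit, degradation).
    Assumes perfect load balancing and that up to alpha*request limit CPUs
    per Pod are actually free on the node (best case for soft isolation)."""
    per_pod = demand / replicas
    used_req = min(per_pod, request) * replicas
    idle_req = max(request - per_pod, 0.0) * replicas
    used_lim = min(max(per_pod - request, 0.0), alpha * request) * replicas
    degraded = max(per_pod - request * (1 + alpha), 0.0) * replicas
    return used_req, idle_req, used_lim, degraded

# Example: 5 Pods, demand of 5.6 CPUs, alpha = 10% ->
# 5.0 CPUs used (request), 0 idle, 0.5 limit, 0.1 degraded.
print(allocate(5.6, 5, alpha=0.1))
```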
To analyze the effects of the scaling delay on the results, we simulated two additional cases, i.e., when the delay is 2 and 8 [TUs]. Figs. 6(a) and 6(b) report the degradation and total resources (in [CPUxTU]) for different scaling delays, when α = 0% and α = 50%, the Pod size is 1 CPU and the scaling threshold is 85%. Results show that a lower scaling delay corresponds to a lower degradation. This is due to a faster response to CPU demand variations, increasing total resources. By using soft isolation, degradation is compensated with a small number of limit resources regardless of the scaling delay value, which confirms the effectiveness of this approach. C. Cost Analysis To analyze the costs, we compare an approach based on limit resources (lim) with one that uses only request resources (req) and one that just accepts degradation (deg). The lim approach is beneficial if its cost (C lim ) is lower than or equal to the deg cost (C deg ) and the cost of the req solution (C req ). The following system holds: Each cost depends on the amount of request resources (R), limit resources (L), and degradation (D) and their unitary price (p R , p L , and p D , respectively). For the lim case we have C lim = R lim p R + L lim p L + D lim p D . Similar formulations can be derived for C deg and C req . We can use (2) and (3) to determine the values of p R , p L , p D for which the use of the lim approach is beneficial. As an illustrative example, let us consider the case with α=50% in Fig. 5 as lim, the benchmark as deg and the case with 60% threshold as req (first and last bars in Fig. 3, respectively). The related amount of resources is reported in Tab. I. Let us consider the worst case for lim, i.e., the equalities in (2) and (3). By solving (2) for p D and substituting it in (3) we find that p L = 18.02p R . Lower values of p L make the lim approach appealing for a service provider compared to the req approach. Similarly, we can solve (3) for p R and substitute it in (2) to find that p L = 1.04p D . For lower values of p L , using lim is more beneficial than deg (i.e., compensating degradation with limit resources is less costly than accepting it). IV. CONCLUSION AND FUTURE WORK This letter presents a performance analysis of different Pod dimensioning strategies in cloud native scenarios from a service provider perspective. A simulator has been developed to mimic K8s behavior and evaluate the performance of these strategies in terms of degradation and idle CPUs. Results show that degradation can be mitigated by using limit resources under the assumption that additional α% limit resources can be accessed by the Pods. In particular, a strategy based only on limit resources with α=50% can achieve the same level of degradation as a conventional strategy with a scaling threshold fixed to 60%, while requiring 25% less reserved CPUs. Moreover, savings can be achieved if the unitary price of a [CPUxTU] of limit resource is lower than 18 times the price of a request [CPUxTU] resource. Since limit resources are not reserved, this approach is viable for service providers that can tolerate some degradation in case the limit resources are not available. Nevertheless, intelligent Pod scaling strategies can be developed to compensate for this, e.g., by monitoring degradation and adding request resources only when needed. Studies from a cloud provider perspective on how to price resources and how to increase resource sharing are left as future work.
4,867.8
2023-03-01T00:00:00.000
[ "Computer Science", "Engineering" ]
MODELING THE EFFECT OF STOCHASTIC DEFECTS FORMED IN PRODUCTS DURING MACHINING ON THE LOSS OF THEIR FUNCTIONAL DEPENDENCIES Modeling the effect of stochastic defects formed in products during machining on the loss of their functional dependencies. The article investigates the influence of hereditary defects formed in the surface layer of products from metals of heterogeneous structure on the quality of surfaces treated with finishing methods. The research is based on an integrated approach based on the results of the deterministic theory of defect development and methods of probability theory. The treated layer of the product is considered as a medium weakened by random defects that do not interact with each other, namely: structural changes, cracks, inclusions, the parameters of which are random variables with known laws of their probability distribution. The causes of structural changes, crack formation on the treated surface product depending on different types of probability distribution of dimensions are investigated: length, depth of defects, and their orientation. From these positions, technological possibilities of their elimination by definition of branch of combinations of the technological parameters providing necessary quality of the processed surfaces are considered. Modeling of thermomechanical processes in the treated surface containing hereditary defects is carried out based on thermoelastic equations with discontinuous boundary conditions in the places of their accumulation. The research used the apparatus of boundary value problems of mathematical physics equations, the method of singular integral equations for solving problems of fracture mechanics, Fourier-Laplace integral transformations for obtaining exact solutions, the method of constructing discontinuous functions. The dependences determining the intensity of stresses in the vertices of hereditary defects are obtained. A method for predicting the nature of crack formation depending on the probability distribution of defects, the values of heat flux entering the surface layer of the processed product has been developed. It is established that the increase in the homogeneity of the material leads to an increase in the value of heat flux, which corresponds to a fixed probability of failure. Introduction Purpose and objectives of the study This study aims to develop the theory and recommendations on technological methods of significantly reducing grinding defects such as burns and cracks in processing parts made of materials and alloys, the surface layer of which has hereditary defects of structural or technological origin. Achieving this goal requires setting and solving the following tasks: 1. To study the mechanism of formation of defects in the surface layer of parts made of materials and alloys prone to defect formation during their processing by grinding, considering previous operations and hereditary inhomogeneities that occur. 2. To develop a mathematical model that describes the thermomechanical processes in the surface layer during the grinding of parts of materials and alloys, considering their inhomogeneities that affect the formation of technological defects and determining the criteria for defect formation. 3. Check the adequacy of the results obtained on products made of materials prone to defects during processing by finishing methods. 
Materials and methods of research Theoretical research is carried out based on scientific bases of technology of mechanical engineering and thermophysics of mechanical and physical and technical processing processes, theories of thermoelasticity, the complex approach of modern deterministic theories of fracture mechanics, and methods of probability theory. In addition, the research uses the apparatus of boundary value problems of equations of mathematical physics and the method of singular integral equations for solving destruction problems. First, we need to build the first part of the model describing thermomechanical processes and the impact of stochastic defects formed in products on finishing operations on the loss of their functional properties, namely the equations describing thermomechanical processes in the treated layer of products on finishing operations. When choosing and substantiating the mathematical model, it was taken into account that both thermal and mechanical phenomena accompany the process of grinding parts. However, the predominant effect on the stress-strain state of the surface layer has temperature fields. Given that the bulk of the surface layer of the metal during grinding is in the elastic state, we can use the model of the thermoelastic body, which reflects the relationship of mechanical and thermal phenomena at the final values of heat fluxes. Since information on the propagation of temperatures and stresses along the depth and direction of movement of the tool is essential for the study of the thermomechanical state of polished surfaces, a flat problem is considered [4]. The influence of inhomogeneities in phase transformations of unstable structures, intergranular films, contour boundaries of hereditary austenitic grains, carbide stitching, nonmetallic inclusions, shells, flocs, and other defects arising in the surface layer and defects in the form of conditional cracks, which has the form (Fig. 1). The system of equations that determine the thermal and stress-strain state when grinding the surface of parts with coatings, the upper layer of which has inhomogeneities such as inclusions and microcracks, contains: The equation of nonstationary thermal conductivity: . τ Lame elasticity equation in displacements: where ( , , ) T x y τ is the temperature at the point with coordinates ( , ) x y and at any time τ ; a is the thermal conductivity of the material; t a is a temperature coefficient of linear expansion; , G µ are La-mé constants; , v ν are components of the displacement vector of the point ( , ) x y ; The initial conditions for this task can be taken as: ( , , 0) 0. Boundary conditions for temperature and deformation fields, taking into account heat transfer from the surface outside the area of contact of the tool with the part and intense heat dissipation in the processing area, are: ( , ) ; 0; ; where ( , ) q y τ is the intensity of heat flux, which is formed as a result of the interaction of the tool with the part; λ is the coefficient of thermal conductivity of the material to be ground; * 2a is the length of the contact zone of the tool with the surface to be treated; γ is the heat transfer coefficient with the environment; , x xy σ τ are normal and tangential stresses. 
The influence of the design parameters of the tool on the thermomechanical state of the surface layer is determined using the boundary conditions in the form of: v v t is the grinding modes, * 2a is the length of the arc of contact of the tool with the part; * l is the distance between the cutting grains. The maximum values of the instantaneous temperature M T , from single grains to the constant componentk T , were theoretically and experimentally confirmed. For the layer being processed and having structural and technological inhomogeneities, the discontinuity conditions of the solution, depending on the type of defect, will be: : : where , , σ , τ The problem's solution was carried out by the method of discontinuous solutions [21]. These solutions satisfy the Fourier equations of thermal conductivity and Lame elasticity everywhere except the defect boundaries. When crossing the boundary of the field of displacements and stresses suffer jumps of the first kind, i.e., their jumps , , σ , τ The problems of thermoelasticity (1) -(8) were solved using integral Fourier transforms on the variable γ and Laplace on τ to the functions ( , , ) T x y τ , , ( ) , x x y σ τ . Classical strength criteria evaluated the equilibrium state of the deformable surface layer. Of the available failure criteria that consider the local physical and mechanical properties of inhomogeneous materials, the most appropriate is the criteria of the force approach associated with the use of the concept of stress intensity factor (SIF). When the load leads to the fact that the stress intensity I K becomes equal to the limit value Ic K , the crack-like defect turns into the main crack [19,20,21,22]. The development of technological criteria for the control of defect-free grinding is carried out based on the established functional relationships between the physical and mechanical properties of the processed materials and the main technological parameters. The quality of the treated surfaces will be ensured if with the help of control technological parameters to choose such processing modes, lubricating and cooling media and tool characteristics that the current values of grinding temperature ( , , ) T x y τ and heat flux ( , ) q y τ , stresses M σ and grinding forces , x y P P coefficient Ic K do not exceed their limits. Implementation of the system of limiting inequalities by the values of the temperature and the depth of its distribution in the form of [9, 10]: which avoids the formation of grinding burns and can be the basis for the design of grinding cycles by thermal criteria. Processing of materials and alloys without grinding cracks can be provided to limit the limit values of stresses formed in the zone of intensive cooling of stress: In the case of the dominant influence of hereditary inhomogeneity on the intensity of the formation of grinding cracks, it is necessary to use criteria, the structure of which includes determining relationships of technological parameters and properties of the inhomogeneities themselves. As such, we can use the limitations of the stress intensity factor [23,24,25]: or providing with the help of technological parameters of the limiting value of heat flux, which maintains the balance of structural defects: Defective grinding conditions can be implemented using information about the material structure being processed. 
Thus, in the case of the predominant nature of structural imperfections of length 2l of their regular location relative to the contact zone of the tool with the part, you can use as a criterion the equilibrium condition of the defect in the form: where , , k p q t ν ν are the grinding modes; , D C are tool parameters; , a λ are thermophysical characteristics of the treated coating; c K is the crack resistance of this coating; G is the modulus of elasticity; ν is the Poisson's ratio; t a is the temperature coefficient of linear expansion; l is the characteristic linear size of the structural parameter (structure defect). These inequalities correlate the longitudinal characteristics of the temperature and force fields with the control and technological parameters. Furthermore, they specify the range of combinations of these parameters that meet the obtained thermomechanical criteria. At the same time, the properties of the processed material are taken into account, and product quality assurance is guaranteed. Based on the received criterion ratios, it is possible to provide the quality of a superficial layer of details at grinding, taking into account the maximum processing productivity. Next, we need to build the second part of the model, which describes the thermomechanical processes and the impact of stochastic defects formed in products at finishing operations on the loss of their functional properties, namely to calculate the probabilities and criteria of equilibrium of stochastically distributed defects in the surface layer. Modern materials used in technology have a complex structure formed by interacting particles. Depending on the scale of consideration (structural level), such particles can be atoms of different elements, vacancies, dislocations, and their networks, then -crystals, blocks of crystals, grains, polycrystalline aggregates, radicals, fibers, lamellar or three-dimensional inclusions, micro-and macrocracks. Particles may differ in chemical composition, physical properties, geometry, and relative position. Larger particles in an adequately ordered structure may contain more minor defects. As a result, there is a tremendous local heterogeneity of authentic materials. The reduction of the theoretical strength of authentic materials to the technical level is a consequence of defects -gaps in the continuity and homogeneity of the building, which arise from the formation of materials and their products (structural and technological defects). The complexity of deformation and destruction that occur in the microvolumes of the body does not allow taking into account the impact on these processes of all the imperfections characteristic of this material. These defects refer to different structural equations. Therefore, their effect on strength and fracture is different. In studying the causes of cracking during the grinding of materials, the most dangerous defects are considered -cracks (actually cracks, crevices, elongated sharp cavities) and sharp rigid inclusions. Such defects include extraneous elastic inclusions with very small or substantial elastic and strength characteristics compared to the characteristics of the base material (matrix). For example, low-strength graphite plate inclusions in the ferrite matrix of cast iron with some approximation can be interpreted as cracks. The detailed introduction of defects, given by their defining parameters, is the first feature of the material model under consideration. 
The second feature is related to the static nature of the defect risk distribution. denote the geometric parameters of defects of a specific r-th grade; r n is the number of defining parameters for a given defect, where r is the type of defect. They determine the defects' size, configuration, and location (orientation). For example, for isolated flat elliptical cracks or rigid inclusions, we have five independent parameters (two ellipse axes and three angular orientation parameters for circular defects -three (radius and two angular parameters)). As the strength of the continuum, take the value of the resistance of the material of development The appearance of these functions depends on the structure and technology of the material. Determinants can be stochastically independent or dependent. It, in particular, may be a consequence of the manufacturing technology (for example, in the thermo-machining of the alloy Alnico 8HC (R1-1-13) (Al -7%; Ni -14%; Co -38%; Cu -3%; Ti -8%; Fe -other), between size and orientation and inclusions, there is a specific correlation). In the case of stochastic independence of defect parameters, the total distribution of parameters is equal to the product of partial distributions of each parameter separately: Since, as accepted [26,27], the ultimate load for a body is equal to the ultimate load of its least strong element (weak link), the distribution function of the ultimate load 1 , ( ) n F P η for bodies of volume V can be found according to the formula for the distribution of the minimum member of the samples consisting of n elements of the general set of elements described by the function 1 The same result is obtained by determining the probability of destruction of at least one element in the set of 0 0 V n V defective elements, and the probability of destruction of each of them separately is equal to 1 1 , , ( ) F P η ξ at fixed 1 , , P η ξ . Formula (18) is used in many works on the statistical theory of brittle strength, based on the hypothesis of a weak link. The value of the function 1 , ( , ) n F P η ξ is equal to the probability P of local destruction of a body of size V under the action of a given homogeneous complex stress field: 1 1 ( , , ) ( , , ) . P P Fn P η ξ = η ξ (19) Determining this probability is one of the main tasks of choosing the technological parameters of defect-free processing. which leads to a Weibull-type distribution [26,27]: where 0 C > , 0 m > are limitations, which depend on the number of defects of the value determined experimentally for a given type of material and load. Determining these values on the basis of the function 1 ( ) n F P makes it possible to establish the parameters on which they depend, in particular, to establish their explicit relationship with the characteristics of material defects and the type of stress state that causes failure. The above scheme of determining the probability of destruction of the body under the action of a complex homogeneous stress state is valid in the case when defects of one kind weaken the body material. Moreover, if the body is weakened by defects of different varieties that do not interact with each other, the result is easily generalized. Let the material be weakened by defects ( ) γ r of different varieties. The average number of 0 r n defects of each variety per unit size 0 V is assumed to be known, where 1, 2, 3, , i = … γ . 
Then, similar to the previous one, we can determine the function 1 1 , ( , ) R F P η ξ of the strength distribution of elements weakened by one defect of each variety separately. We need to know the appropriate deterministic conditions for destroying an element with a defect of each variety and the probability distributions of the defining parameters of defective elements of each variety. Since the value of 1r F determines the probability of destruction of an element with one defect of this variety, and ( ( , , ) , Let us consider an example of the calculation of statistical parameters of destruction at thermal influence. Let the half-plane weakened by uniformly scattered random cracks that do not interact with each other be under the action of uniform heat flux of intensity q (Fig. 1). The laws of the joint distribution of the half-length k l and the angle of orientation k ϕ will be known. At a specific value of heat flux (let us call it the limited value * q ) develops at least one crack, i.e., the process of destruction begins. The condition for the development of a single crack with given geometric parameters is established in [26,27] by formula (22): We determine the probability distribution of the limiting heat flux and some of its statistical characteristics. From formula (22) it is seen that due to the randomness of the length l and the orientation of the cracks ϕ , which vary within some limits 0 1 ≤ ≤ α , π / 2 ϕ ≤ , the value of * q the limiting heat flux on the element, the half-plane with one crack is also random. The function * 1 ( ) F q of the probability distribution is found based on formula (20): where R is a two-dimensional domain of possible values of random variables and l in which the rela- is the density distribution of probabilities of length l and orientation φ of cracks. Assuming the values of l and ϕ are statistically independent ( , ) ( ) ( ) f l f l ϕ = ϕ ⋅ we obtain the formula: Here the Lϕ is the integration domain of all possible values of π 2   ϕ ϕ ≤     , which executes the relation: We assume that the distribution of cracks in orientation is uniform, ie, 3 π ( 1 ) f ϕ = , and in length has the form [29,30]: where S is the crack shape parameter that determines the structural heterogeneity of the material (the larger S , the more likely small cracks, ie the material is more homogeneous), a is the scale parameter or [28]: where r is the fracture parameter of the material (the larger r , the more likely small cracks). The distribution function ( ) 3 F l will be written: Substituting the expressions for 3 ( ) F l and 3 ( ) f ϕ in formula (23) we obtain: For distribution (27), when 0 1 d ≤ ≤ , 3 2 min q Aa − = and relation (28) holds for π / 2 ϕ ≤ ϕ ≤  , where: In this case, the function * 1 ( ) F q can be represented as: If the half-plane contains n cracks, we assume that the limiting heat flux for the half-plane is equal to the smallest value of the limiting heat fluxes of its elements (weak link hypothesis) [29,30]. Then the distribution function of the limiting heat flux of the half-plane with n cracks is determined by the following formula: The graphs show that the increase in the homogeneity of the material leads to an increase in the value of heat flux, corresponding to a fixed probability of failure. On the other hand, increasing S or r decreases the probability of failure, which corresponds to a given value of heat flux. 
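The weak-link hypothesis invoked above gives the distribution of the limiting heat flux for n non-interacting cracks from the single-crack distribution as the distribution of the minimum of n samples, i.e., F_n(q*) = 1 − (1 − F_1(q*))^n. The sketch below evaluates this numerically with an illustrative Weibull-type F_1 of the form 1 − exp(−C·q^m) mentioned earlier; the constants C and m are placeholders that would have to be fitted to the material and load, not values from the article.

```python
import numpy as np

def f1_weibull(q, c=1e-3, m=3.0):
    """Illustrative Weibull-type single-defect failure probability
    F1(q) = 1 - exp(-c * q**m); c and m are placeholder constants,
    to be fitted to the material, not taken from the article."""
    return 1.0 - np.exp(-c * np.power(q, m))

def fn_weak_link(q, n, f1=f1_weibull):
    """Weak-link hypothesis: a half-plane with n non-interacting cracks
    fails as soon as its weakest element fails,
    F_n(q) = 1 - (1 - F1(q))**n."""
    return 1.0 - (1.0 - f1(q)) ** n

q = np.linspace(0.0, 20.0, 5)
print(fn_weak_link(q, n=100))  # failure probability grows with crack count n
```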
An increase in the value of S or r lengthens the range of the random heat flux for which the probability of destruction remains low. The magnitude of the stress intensity factors for defects such as cracks is influenced by the size and orientation of these defects, the depth of their occurrence, their mutual location, and the magnitude of the heat flux. The stochastic model of crack formation during the grinding of metals of heterogeneous structure rests on an integrated approach that combines the results of the deterministic theory of the development of individual defects with the methods of probability theory. The surface layer is considered as a medium weakened by random non-interacting defects (crack-like inclusions) whose defining parameters are random variables with known probability distribution laws. The probability of destruction of the surface layer depends on the probability distributions of the defect dimensions (length and depth) and orientation. The probabilistic characteristics of the limiting heat flux are considered from this standpoint. It is established that an increase in the homogeneity of the material leads to an increase in the value of the heat flux corresponding to a fixed probability of failure. Conclusions This work addressed the scientific problem of establishing the calculated dependences that determine the impact of hereditary defects from previous operations on the quality of the surface layer during grinding, with the aim of creating optimal technological processing conditions that take into account the accumulated defects and inhomogeneities in the surface layer of materials and alloys. The following results were obtained. 1. The influence of the technological and structural heterogeneity of materials on the mechanism of origin and development of defects under the thermomechanical phenomena accompanying diamond-abrasive processing is established. 2. An analytical model is developed for determining the thermomechanical state during the grinding of parts whose working surfaces contain inhomogeneities of hereditary origin. Based on this model, the functional relationships between the quality criteria and the controlling technological parameters are determined. 3. A stochastic model of the process of crack formation during the grinding of materials of heterogeneous structure is constructed, on the basis of which the dependences of the probability of cracking on the modes and other parameters of the grinding process are obtained. It is shown that an increase in the homogeneity of the material leads to a decrease in the probability of defects such as cracks and, consequently, allows the grinding operation to be intensified while maintaining the required quality. In combination with experimental studies, the obtained dependences make it possible to theoretically determine the regions of combinations of technological parameters that provide the required quality of the treated surfaces.
5,339.6
2022-01-01T00:00:00.000
[ "Engineering", "Materials Science" ]
Cheminformatics Modeling of Gene Silencing for Both Natural and Chemically Modified siRNAs In designing effective siRNAs for a specific mRNA target, it is critically important to have predictive models for the potency of siRNAs. None of the published methods characterized the chemical structures of individual nucleotides constituting a siRNA molecule; therefore, they cannot predict the potency of gene silencing by chemically modified siRNAs (cm-siRNA). We propose a new approach that can predict the potency of gene silencing by cm-siRNAs, which characterizes each nucleotide (NT) using 12 BCUT cheminformatics descriptors describing its charge distribution, hydrophobic and polar properties. Thus, a 21-NT siRNA molecule is described by 252 descriptors resulting from concatenating all the BCUT values of its composing nucleotides. Partial Least Square is employed to develop statistical models. The Huesken data (2431 natural siRNA molecules) were used to perform model building and evaluation for natural siRNAs. Our results were comparable with or superior to those from Huesken’s algorithm. The Bramsen dataset (48 cm-siRNAs) was used to build and test the models for cm-siRNAs. The predictive r2 of the resulting models reached 0.65 (or Pearson r values of 0.82). Thus, this new method can be used to successfully model gene silencing potency by both natural and chemically modified siRNA molecules. Introduction Specific gene silencing has shown great potential to elucidate gene function, identify drug targets, and develop more specific therapeutics than are currently available [1][2][3][4][5]. RNA interference (RNAi) is a post-transcriptional gene regulatory mechanism through which expression of a specific gene can be knocked down, which can be triggered either by short interfering RNAs (siRNAs) or by microRNAs (miRNAs) [6,7]. The miRNAs are endogenous noncoding RNAs; they usually bind target mRNAs with partial complementarity, mostly involving residues 1-8 (the seed region) [8]. Unlike siRNAs, which regulate mRNA levels through a cleavage event, miRNAs function by attenuating translation [8]. Here, we focus on siRNA-mediated gene silencing. A siRNA is a double-stranded RNA molecule consisting of 21-23 nucleotides (21-23 NTs) with 2-NT overhangs at the 3′ ends and phosphate groups at the 5′ ends [9,10]. siRNA molecules can be exogenously introduced or generated by RNase III-type enzyme Dicer from long dsRNA or hairpin RNA [6]. The two strands of a siRNA molecule have sequences that are sense and antisense with respect to the target mRNA. Serving as the template for sequence-specific gene silencing by the RNAi machinery, the antisense strand of siRNA is also called the guide strand, while the sense strand is known as the passenger strand. Chowdhury et al. have recently described an approach to design siRNAs for silencing nucleocapsid phosphoprotein and surface glycoprotein genes of SARS-CoV-2, demonstrating the potential power of siRNA to combat emergent pandemics [11]. Other, more advanced applications of siRNA have also been described by Zhang et al., indicating the broad application potential of the siRNA technology [12]. An excellent example of achieving imaging-guided and tumor-targeted siRNA delivery and cancer treatment has recently been published by Guo et al. [13], highlighting the promising potential of siRNA for translational biomedicine. Those who are interested in broader applications of siRNA technology are referred to a recent comprehensive review by Weng [14]. 
There are many potential siRNA molecules that may be able to knock down a specific target mRNA: if there are N nucleotides in the target mRNA, N-20 siRNAs of 21 nucleotides can be designed by sliding a 21-nucleotide window along the entire length of the mRNA. However, only a fraction of these candidate siRNAs is highly effective in silencing the target mRNA [15,16]. Applying more than five different siRNAs may lead to the saturation of the RNA-induced silencing complex with the degradation of untargeted genes [17]. Therefore, selecting the few most effective siRNAs from the large number of candidate siRNAs is crucial for maximizing specific gene silencing and minimizing off-target effects. Toward this end, numerous algorithms have been developed to facilitate the rational design of siRNA. Two categories of approaches have been published: rule-based methods and machine learning-based methods. The rule-based methods exploited a variety of features that are more easily interpretable, such as the thermodynamic features describing binding free energies and the sequence stability [18][19][20], compositional features describing the occurrences of certain nucleotides at certain positions of the siRNA sequences [16,21,22], secondary structure characteristics of the mRNA target and the siRNA [23][24][25], and uniqueness of the target site [26,27]. These siRNA features have been reviewed [28][29][30], and methods based on these features are intuitive and beneficial to uncovering the fundamental requirements for active siRNA molecules. However, they often produced high levels of false positive predictions when tested on data from external sources [31,32]. More complicated factors are likely involved in RNAi, and cooperative interactions among the various factors may play an important role in determining the efficacy of siRNA-mediated gene silencing. To improve the prediction accuracy and model robustness, more rational approaches involving machine learning methods and multivariate data analysis emerged [33][34][35]. These algorithms have advanced the prediction tools in terms of quantitatively predicting siRNA potency in a genome-wide gene silencing study, and they based their predictions on such features as duplex stability, sequence characteristics, mRNA secondary structure, siRNA secondary structure, and the target site uniqueness. For example, He et al. [36] have described a study combining nucleotide frequency, thermodynamic stability, and thermodynamic siRNA-mRNA interaction as a new kind of mixed features. These features work well for natural siRNAs but do not take the chemical structures into account. Jia et al. employed C-features that are based on frequencies of different combinations of nucleotide letters [37]. As with similar features based on nucleotide single-letter representation, frequencies of single-letter combinations do not reflect the chemical nature of the nucleotides involved, especially when dealing with chemically modified nucleotides. Ayyagari [38] has recently described an in silico study using existing siRNA prediction services to design potential siRNAs against SARS-CoV-2. This article provided an excellent example demonstrating the significance of siRNA prediction in cases of urgent need. We note that in all these works, no chemical structural features of individual nucleotides (NTs) of a siRNA sequence have been encoded. 
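For illustration, the kind of single-letter "compositional" feature used by the sequence-only methods cited above can be sketched as position-wise nucleotide indicators; the sequence and function names below are hypothetical. Such an encoding carries no information about the chemistry of each residue, which is the gap addressed by the approach introduced next.

```python
import numpy as np

ALPHABET = "AUGC"   # single-letter nucleotide codes (no chemistry encoded)

def positional_one_hot(sirna: str) -> np.ndarray:
    """Encode a siRNA guide strand as position-wise nucleotide indicators,
    the style of 'compositional' feature used by sequence-only methods."""
    x = np.zeros((len(sirna), len(ALPHABET)))
    for i, nt in enumerate(sirna.upper()):
        x[i, ALPHABET.index(nt)] = 1.0
    return x.ravel()            # e.g. 21 positions x 4 letters = 84 features

# Illustrative 21-NT sequence (not taken from the datasets used in the paper)
print(positional_one_hot("UAGCUUAUCAGACUGAUGUUG").shape)   # (84,)
```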
Thus, they cannot be used to predict the potency of gene silencing mediated by chemically modified siRNA molecules; chemical modification, however, provides solutions to many of the challenges facing siRNA therapeutics [39]. Tatabatake et al. demonstrated that chemical modification could improve the nuclease resistance of siRNA molecules, prolonging their activity [40]. Koller et al. suggested that chemical modification could increase the potency of the modified siRNAs [41]. To reduce or avoid immune-mediated and hybridization-dependent off-target effects, careful sequence design is an essential first step, while chemical modifications can provide further protection [39,42,43]. Here, we propose a novel approach that describes siRNA molecules in a more physicochemical way, and the models can thus predict the potency of gene silencing triggered by chemically modified siRNAs (as well as by natural siRNAs). While the sequence information of a siRNA is at the core of siRNA-mediated gene silencing, in this approach, we hypothesize that the potency of a siRNA is also strongly dependent on the chemical structure of the individual nucleotides that compose a siRNA molecule. Thus, we can describe a siRNA molecule in terms of two crucial characteristics: we first numerically describe each nucleotide (NT) using a set of cheminformatics descriptors, which together should capture its physicochemical properties that might be correlated with molecular interactions, and then the whole siRNA sequence is described by simply concatenating all the descriptors in the order of the siRNA's sequence. In contrast, current literature methods do not encode the chemical nature of each nucleotide; instead, they simply use the one-letter representation of different nucleotides, e.g., counting the frequencies or thermodynamic stability based on local pairs of nucleotides. Since there are chemical similarities/differences among the nucleotides, encoding the chemical nature of each of the composing nucleotides can offer additional information to reflect the chemical similarity and dissimilarity when comparing different strands of siRNAs. The partial least square (PLS) regression [44] method is employed to develop statistical models of gene silencing potency. For regular siRNAs, the Huesken dataset (with 2431 siRNAs) [33] was used to perform model building, validation, and comparative analysis. The results from this study were comparable with or superior to those from the Huesken paper [33] in terms of the Pearson coefficients; to model the chemically modified siRNAs, the Bramsen dataset (with 48 chemically modified siRNA molecules) [45] was used to build and validate the models. The predictive r2 of the resulting models reached 0.64-0.65 (i.e., the Pearson coefficients ranged from 0.81 to 0.82). In the following sections, we first detail the methodology for descriptor generation and the modeling workflow, followed by results and discussion. The Huesken Dataset To build and validate models for the potency of siRNAs composed of natural nucleotides, 2431 siRNA sequences (antisense format) of the Huesken dataset [33] were used. The dataset is considered to be the first landmark data publicly available for siRNA gene knockdown experiments, targeting 34 different mRNAs. Both the 19-NT sequences without the overhangs (2 NT) and the 21-NT sequences with overhangs are available. Considering that the overhangs have an effect on the potency of gene silencing, the 21-NT sequences were used in building models in this study. 
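A minimal sketch of this descriptor-concatenation idea is shown below. In the actual workflow the per-nucleotide values would be the 12 BCUT descriptors computed in MOE; the numbers used here are placeholders, and the sequence is hypothetical.

```python
import numpy as np

# Placeholder lookup: nucleotide -> its 12 descriptor values.
# In the actual workflow these numbers come from MOE (BCUT_PEOE/SLOGP/SMR_0..3);
# the values below are illustrative only.
BCUT = {
    "A": np.arange(12) * 0.10,
    "U": np.arange(12) * 0.11,
    "G": np.arange(12) * 0.12,
    "C": np.arange(12) * 0.13,
    # chemically modified nucleotides would each get their own row of values
}

def sirna_descriptors(sequence) -> np.ndarray:
    """Concatenate per-nucleotide descriptors in sequence order:
    a 21-NT siRNA -> 21 x 12 = 252 descriptors."""
    return np.concatenate([BCUT[nt] for nt in sequence])

guide = "UAGCUUAUCAGACUGAUGUUG"        # illustrative 21-NT antisense strand
print(sirna_descriptors(guide).shape)   # (252,)
```

Because each nucleotide is represented by continuous physicochemical values rather than a letter, two sequences that differ only by a chemical modification at one position produce numerically similar but distinguishable descriptor vectors.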
The gene knockdown potency was expressed in normalized form. Figure 1a shows the distribution of the gene silencing potency by the siRNAs in this dataset, indicating a wide potency range. Eight subsets of sequences were created according to Huesken et al. [33] for comparison purposes. These subsets are named All (2182), All human (1744), Human E2s (1229), Rodent (438), Random all (1091), Random all (727), Random all (545), and Random all (218). These subsets were used as training sets to build corresponding models. Four subsets were also created for use as the test sets, and they are All (249), All human (198), hE2 (139), and Rodent (51). Note that the numbers in the parentheses refer to the number of siRNA sequences in that subset. The Bramsen Dataset In order to predict the potency of siRNAs consisting of chemically modified nucleotides, 48 siRNA sequences targeting eGFP from Bramsen et al. [45] were used to build and evaluate models. There are 21 different types of chemical modifications involved in this dataset. The modifications can be broadly categorized into 2′-substituted RNAs, 4′-modified RNAs, locked RNAs, and RNAs with radical modifications of the ribose sugar ring [45]. For each modified siRNA molecule, the 21-NT antisense sequences with overhangs were used in this study. The gene knockdown potency was expressed as eGFP level, where a larger eGFP level indicates a poorer potency. Figure 1b shows the distribution of the gene silencing potency by this set of chemically modified siRNAs; clearly, the potency of this dataset is less diverse compared with the Huesken data. Overall Workflow for the Predictive Modeling of siRNA Potency The overall workflow for building predictive models for siRNA potency is shown in Figure 2. It involves the following major steps: (1) preparation of the siRNA dataset, where all the structures of the individual nucleotides involved are built and processed utilizing the MOE program, while the siRNA antisense sequences and the corresponding potency values are verified; (2) generation of the BCUT descriptors for each of the siRNA molecules in the dataset; (3) rational partitioning of the dataset into pairs of training and test sets; (4) building predictive models utilizing the partial least square (PLS) procedure as implemented in MOE; (5) validating the models, where each model is tested using the corresponding test set. It is worth noting that in some implementations of PLS, one uses LOO (leave-one-out) as the way to select the optimum number of principal components (PC) for developing final models. Alternatively, as implemented here in MOE, it is customary to scan the number of principal components (PC) to find the optimum models and use a second set as the validation set. In recent years, a second "external" set has often been employed to further validate the models; however, it has been an accepted practice in the QSAR community to employ one training vs. one test set [46], which is employed in this study. Details of siRNA descriptor generation, the rational design of the pairs of training and test sets, model building, and validation procedures are as follows. Generation of the BCUT Descriptors for Each siRNA Molecule Specifically, we choose the 12 BCUT descriptors [47,48] implemented in the MOE software (Chemical Computing Group, Montreal, QC, Canada), describing the charge distribution, hydrophobic property distribution, and polar property to characterize the chemical structure of each individual nucleotide. 
A 21-NT siRNA molecule is then numerically described by 252 BCUT descriptors. One straightforward approach to describing a siRNA molecule is to calculate typical cheminformatics descriptors used to describe small organic molecules. However, it often fails to capture the sequential information of polymeric molecules such as peptides and nucleotides-the repeated occurrence of the same chemical building blocks composing a polymeric molecule often leads to the degeneracy of descriptors. This issue had been discussed by Sandberg [49] as well as by Jonsson [50,51]. Thus, our approach to characterizing siRNA molecules is similar to those by Sandberg and Jonsson in their respective description of peptides and DNA molecules. The basic concept is that a polymeric molecule can be numerically characterized by molecular descriptors of its composing building blocks. Thus, a siRNA molecule can be described by molecular descriptors of its composing nucleotides (A, U, G, C, T, and their different chemical modifications). Specifically, each nucleotide position in a siRNA sequence can be translated into the corresponding descriptor values for the chemical structure of the NT. First, we calculate the 12 BCUT molecular descriptors for each of the nucleotides involved: for the Huesken dataset, BCUT for five natural nucleotides was calculated, while for the Bramsen dataset, BCUT for 98 chemically modified nucleotides was calculated employing the MOE program. The original siRNA sequences for both datasets and the corresponding BCUT values for involved nucleotides are provided in the Supplementary Materials. Then by concatenating the BCUT descriptors for composing nucleotides on the basis of a given siRNA sequence, the descriptors for a siRNA molecule are obtained. The procedure is depicted in Figure 3. Rational Design of Pairs of Training and Test Sets. The basic idea behind the rational design of a pair of training and test sets is that molecules in the test set should be properly represented by the molecules in the training set. Golbraikh et al. used what they called the sphere exclusion algorithm to select molecules for training and test sets [52,53]. Here, we adopted a well-validated clustering algorithm called ART-2a developed by Carpenter [54]. One advantage of the ART-2a algorithm is that it keeps updating the centroid of each cluster so that the centroid of each cluster faithfully represents the molecules in that cluster. In this work, the whole set of siRNAs is first converted to their numerical representation as described above. The data matrix was subject to ART-2a clustering. The vigilance parameter was adjusted so that the desired number of multimember clusters was obtained: 278 multimember clusters and 9 multimember clusters were obtained for the Huesken dataset and the Bramsen dataset, respectively. The test set molecules were then randomly picked to cover all the above clusters on the condition that the most potent siRNA and the least potent siRNA were not included in the test set. After the test set siRNAs were selected, the remaining siRNAs were used as the corresponding training set. The main reason for using chemical structure-based clustering for training-testing set design is to ensure the structural diversity of the siRNAs in the training set. Chemical structure-based diversity design is critical for training and testing set design, as discussed by Golbraikh [53]. 
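The cluster-then-sample design can be sketched as follows. ART-2a is not available in common Python libraries, so k-means is used here purely as a stand-in clustering step; the rule of excluding the most and least potent siRNAs from the test set follows the text, and the data are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

def rational_split(X, y, n_clusters=30, seed=0):
    """Cluster-based train/test design: pick one test molecule per cluster,
    keeping the most and least potent siRNAs in the training set.
    (KMeans stands in here for the ART-2a clustering used in the paper.)"""
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)
    protected = {int(np.argmax(y)), int(np.argmin(y))}   # never placed in the test set
    test_idx = []
    for c in range(n_clusters):
        members = [i for i in np.flatnonzero(labels == c) if i not in protected]
        if members:
            test_idx.append(int(rng.choice(members)))
    train_idx = [i for i in range(len(y)) if i not in set(test_idx)]
    return np.array(train_idx), np.array(test_idx)

# Illustrative use with random stand-in data (252 descriptors per siRNA)
X = np.random.default_rng(1).normal(size=(200, 252))
y = np.random.default_rng(2).normal(size=200)
train_idx, test_idx = rational_split(X, y)
print(len(train_idx), len(test_idx))
```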
To demonstrate the statistical confidence of the models, this process was repeated several times; as a result, multiple pairs of training and test sets were generated for model building and model validation: 30 pairs for the Huesken dataset and 30 pairs for the Bramsen dataset. Model Building Technique. Many different machine learning algorithms have been used in the modeling of gene silencing potency. The most popular ones are linear regression [35], Support Vector Machine (SVM) [26], Artificial Neural Network (ANN) [33], and decision tree [20]. In this study, the partial least square (PLS) regression method was selected to build the predictive models to avoid any potential overfitting issues facing many of the above algorithms. It generalizes and combines features from principal component analysis (PCA) and multiple linear regression analysis. The prediction is achieved by extracting from the predictors a set of orthogonal factors (a.k.a. latent variables or principal components) which have the best predictive power. The advantages of PLS include the ability to handle multi-collinearity among the predictors, robustness to noisy and missing data, and reduction in overfitting when the number of predictors gets too large. In our studies, the number of principal components was set as an adjustable variable scanned in each model development to avoid both overfitting and under-fitting issues for any given dataset. Model Validation Strategies. Validation is a crucial aspect of quantitative predictive modeling. We employed five analyses to ensure the quality of the built models as follows: 1. Correlation strength and predictive power. The correlation coefficient r (i.e., Pearson r) was computed to measure the correlation strength, and the predictive r2 was used to measure the prediction power. The values of model development r (or r2) were calculated on the basis of the actual potency and model-predicted potency for the training set siRNAs. They served as the necessary requirement for a reliable quantitative model. Testing r (r2) was calculated on the basis of the actual potency and model-predicted potency for the test set molecules. The value of r (or r2) was viewed as another necessary requirement for a suitable predictive model [46]. Equation (1) was used to determine the value of r, and Equation (2) was used to compute the value of r2, applicable to both training and test sets. In the equations, y_i and p_i are the actual and predicted potencies, respectively; ȳ and p̄ are the means of the y_i and p_i, respectively. N is the number of siRNA molecules. 2. The effect of the number of principal components on the predictive models. The principal component analysis technique was used to extract a set of orthogonal factors that afford the best predictive power. The proper number of principal components is dependent on the size of the training set and the relationship between the descriptors. To establish the best models for a given dataset, we scanned the number of principal components to find the optimal numbers for use in the model. 3. The effect of data partitioning on the predictive models. To ensure the predictive power of the built models, we rationally split/partition the dataset into training and test sets. The training set was used to establish the models, and the corresponding test set was used to validate the models. 
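Before continuing, the two measures used in validation point 1 above can be computed as follows; since Equations (1) and (2) are not reproduced in this text, the standard definitions of the Pearson correlation coefficient and of predictive r2 (one minus the ratio of residual to total sum of squares) are assumed here.

```python
import numpy as np

def pearson_r(y, p):
    """Correlation strength between actual (y) and predicted (p) potencies."""
    y, p = np.asarray(y, float), np.asarray(p, float)
    yc, pc = y - y.mean(), p - p.mean()
    return float(np.sum(yc * pc) / np.sqrt(np.sum(yc ** 2) * np.sum(pc ** 2)))

def predictive_r2(y, p):
    """Predictive r2 = 1 - SS_res / SS_tot, typically reported on the test set."""
    y, p = np.asarray(y, float), np.asarray(p, float)
    return float(1.0 - np.sum((y - p) ** 2) / np.sum((y - y.mean()) ** 2))

# Small illustrative check with made-up potencies
y_true = [0.2, 0.5, 0.7, 0.9, 0.4]
y_pred = [0.25, 0.45, 0.65, 0.85, 0.5]
print(round(pearson_r(y_true, y_pred), 3), round(predictive_r2(y_true, y_pred), 3))
```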
The molecules in the test set were not involved in the model building; thus, the predictive r2 calculated on the basis of the test set will more objectively indicate the true predictive ability. Different partitioning of training and test sets could give rise to models with different predictive powers, especially when the dataset is small, and we performed a series of computational experiments to find the best models. 4. The effect of training set size on the predictive models. The predictive power is strongly dependent on the size of the training set. Thus, different percentages of the original dataset were selected to be used in the training set. The ideal case was to use the least number of siRNAs to develop models, which are then used to predict the greatest number of siRNAs. We used the Huesken dataset to demonstrate this, where 1%, 2%, 3%, ..., and 90% of the whole dataset were used as the training set; and the models were used to predict the potency of 99%, 98%, 97%, ..., and 10% of the remaining siRNAs, respectively. 5. Effect of random shuffling on model development. The predictive models built should faithfully reflect the intrinsic relationship between the descriptors and the gene silencing potency for a given dataset. A random dataset should not result in a predictive model. To prove this, we first randomly shuffled the potencies among the whole dataset, and then the models were built from these scrambled datasets. A different number of principal components was used to perform the PLS (partial least square) modeling. We should expect a dramatic decrease in the predictive power of the models built on the scrambled dataset. In addition, when models are used to predict truly unknown molecules, the applicability domain should be applied before the prediction is made. For example, one should employ the applicability domain as one of the filters, as recommended in the standard workflow protocol advocated by Tropsha [55]. Modeling of the Huesken Dataset BCUT Descriptors for Natural Nucleotides. The chemical structures of the five natural nucleotides (A, C, G, U, and T) were sketched; and the 12 BCUT descriptors were calculated: BCUT_PEOE_0, BCUT_PEOE_1, BCUT_PEOE_2, BCUT_PEOE_3, BCUT_SLOGP_0, BCUT_SLOGP_1, BCUT_SLOGP_2, BCUT_SLOGP_3, BCUT_SMR_0, BCUT_SMR_1, BCUT_SMR_2, and BCUT_SMR_3. The BCUT_PEOE descriptors describe the charge distribution of a molecule. They are calculated from the eigenvalues of a modified adjacency matrix; the BCUT_SLOGP and BCUT_SMR descriptors characterize the hydrophobic property and the polarizability of a molecule, respectively. These two sets of descriptors are determined from the eigenvalues of their respective modified adjacency matrices. Table 1 shows the values of the BCUT descriptors for the five natural nucleotides. Because the 12 BCUT descriptors represent the main features involved in intermolecular interactions, we replace the nucleotides in the siRNA sequences with their 12 BCUT descriptors; the numerical descriptors coupled with the multivariate analytical tool (PLS) should capture any correlation that might exist in the dataset. The Effect of Dataset Partitioning/Splitting on the Models. To avoid the bias of individual rational splitting of the dataset into training and test sets, we performed 30 rounds of rational splitting; 30 pairs of training and test sets were thus generated. 
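A minimal sketch of the random-shuffling (y-scrambling) check described in validation point 5 above is given below, using the scikit-learn PLS implementation rather than MOE and synthetic stand-in data; a genuine structure-potency relationship should survive on the real labels and collapse on the shuffled ones.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from scipy.stats import pearsonr

def y_scrambling_check(X, y, n_components=5, n_rounds=10, seed=0):
    """Compare test-set Pearson r of a model trained on the real potencies
    with models trained on randomly shuffled potencies."""
    rng = np.random.default_rng(seed)
    n = len(y)
    test = rng.choice(n, size=n // 10, replace=False)
    train = np.setdiff1d(np.arange(n), test)

    def fit_score(y_train):
        model = PLSRegression(n_components=n_components).fit(X[train], y_train)
        return pearsonr(y[test], model.predict(X[test]).ravel())[0]

    r_real = fit_score(y[train])
    r_shuffled = [fit_score(rng.permutation(y[train])) for _ in range(n_rounds)]
    return r_real, float(np.mean(r_shuffled))

# Synthetic data standing in for the 252-descriptor siRNA matrix
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 252))
y = X[:, :10].sum(axis=1) + rng.normal(scale=0.5, size=500)
print(y_scrambling_check(X, y))
```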
For each training set, we developed 15 PLS regression models corresponding to the number of principal components being "all", 1,2,3,4,5,6,7,8,9,10,11,12,13, and 14, respectively. Each of the 15 models was then used to predict the potencies of siRNAs in the corresponding test set. For comparison with the original Huesken results, we used the correlation coefficient r as the measure for the quality of models. For each number of principal components, the mean and the standard deviation of the Pearson correlation coefficients were calculated from 30 rational splits. As shown in Figure 4, the standard deviations were very small for each case, meaning that each of the 30 splits gives consistent results. Table 2 lists the related statistics to detail the information in Figure 4. The best model reported by Huesken et al. had a Pearson coefficient of 0.66 for one specific pair of training and test sets. This falls within our resulting range of 0.58-0.68 (for the test set), indicating that our method produced comparable models to Huesken's model BioPredsi [33]. According to a recent benchmark study by Matveeva et al. [35], BioPredsi was the best model tested thus far. Therefore, our new approach has demonstrated a performance similar to the best model reported in the literature. In all the reported cases, the Pearson correlation coefficient r has been used as the quality indicator. It has been reported [33] that the potency data experimentally tested on two different plates of siRNAs had a Pearson correlation coefficient of about 0.70. Thus, all the models with Pearson correlation coefficients of >0.60 were considered reasonable models in the context of these experiments. The Effect of Training Set Size on Model Quality. For a dataset as large and diverse as the Huesken dataset, there should be greater flexibility for the size of the training set (as well as the size of the corresponding test set) to be used. We set the training set size as the percentage of the whole dataset to be 1%, 2%, 3%, 4%, 5%, 10%, 20%, 30%, 40%, 50%, 55%, 60%, 65%, 70%, 75%, 80%, 85%, and 90%, respectively, to develop the models and predict the corresponding 99%, 98%, 97%, 96%, 95%, 90%, 80%, 70%, 60%, 50%, 45%, 40%, 35%, 30%, 25%, 20%, and 15% of the whole dataset. We set the number of principal components to be "all" for all the models for this experiment. The Pearson correlation coefficients of the models were calculated both for the training and the test sets. Figure 5 shows that if 20% or more of the dataset was taken as the training set to build the models and predict the remaining 80%, the resulting Pearson correlation coefficients were all greater than 0.60. All the models were regarded as acceptable. The Effect of Number of Principal Components on Model Quality. To obtain the best models for the Huesken dataset, we set the numbers of principal components of PLS models to be "all", 1, 2, 3, 4, 5, 6,7,8,9,10,11,12,13, and 14 to build models. These models were used to predict the potency of siRNAs in the corresponding test sets. To be consistent with the original work, we used 2153 siRNAs as the training set, and the remaining 278 siRNAs were used as the test set. Pearson r was computed for both the training set and the test set to measure the quality of the models. One pair of training and test sets was shown in Figure 6, while the other 29 pairs of training and test sets gave similar results. If the number of principal components is "all" or greater than 10, all the models were of acceptable quality. 
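The component-scan protocol just described can be sketched as follows, again with scikit-learn standing in for the MOE PLS implementation and synthetic data standing in for the 2431 x 252 descriptor matrix.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from scipy.stats import pearsonr

def scan_components(X, y, components=range(1, 15), n_splits=30, seed=0):
    """Mean / std of the test-set Pearson r over repeated splits for each
    number of PLS principal components."""
    results = {}
    for k in components:
        rs = []
        for s in range(n_splits):
            X_tr, X_te, y_tr, y_te = train_test_split(
                X, y, test_size=0.1, random_state=seed + s)
            pred = PLSRegression(n_components=k).fit(X_tr, y_tr).predict(X_te).ravel()
            rs.append(pearsonr(y_te, pred)[0])
        results[k] = (float(np.mean(rs)), float(np.std(rs)))
    return results

# Synthetic stand-in data (the real matrix would be the Huesken descriptors)
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 252))
y = X[:, :20] @ rng.normal(size=20) + rng.normal(scale=1.0, size=400)
for k, (mean_r, std_r) in scan_components(X, y, components=(1, 5, 10)).items():
    print(k, round(mean_r, 3), round(std_r, 3))
```

A small standard deviation across splits, as reported for the real data, indicates that the choice of split has little effect on model quality.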
If the number of principal components was "all" or greater than 5, all the models also had Pearson r greater than 0.60. The Effect of Random Shuffling on Modeling. To ensure that the models faithfully reflect the intrinsic relationships between the descriptors of siRNA and the gene silencing potency, rather than finding spurious correlations, we randomly shuffled the potencies of the Huesken dataset. We tried to build models based on the scrambled dataset. Fifteen models were built with the number of principal components fixed to "all", 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, and 14, respectively. As shown in Figure 7, the scrambled dataset did not result in any reliable models: the values of Pearson r for the scrambled data had significantly decreased (by 66-75%) compared with the original dataset. These results further supported the conclusion that our models were not spurious ones. Predictive Models Have Been Obtained. To demonstrate the quality of the resulting models, the scatter plots of actual against predicted potencies are shown in Figure 8a (of the training set) and Figure 8b (of the test set) for one of our final models. As mentioned in the discussion on the data partitioning effect on model quality, other models also gave similar results with a standard deviation of 0.003 for the training sets and 0.02 for the test sets. The Pearson r is 0.67 for both the training set and test set, as shown in Figure 8. The RMSE (root mean square error) [56] values for both the training and test sets are 0.15. To compare the performance of our new method with that of the Huesken model [33], we performed modeling of the exact same set of subsets as Huesken. Specifically, the same subsets of training and test molecules reported by Huesken were used to conduct our studies. The subset specification is given in Materials and Methods. Some of the subsets were based on species, such as human vs. rodent; others were based on genes (E2 sequences) as well as random selections of siRNAs. The results are shown in Table 3. The Pearson correlation coefficients in parentheses are quoted from Huesken for comparison. In some cases, the Huesken model (i.e., BioPredsi) performed slightly better; in other cases, our method outperformed BioPredsi. Overall, our approach is comparable with or superior to BioPredsi [35]. The predictive models developed on the basis of the Huesken dataset can be used in the virtual screening of potential siRNA molecules of natural nucleotides for a given target mRNA. Those siRNA molecules that have passed the Watson-Crick pairing with the target mRNA and are scored well by our models could be prioritized as top choices for focused RNAi experiments. These models should be useful in designing potential siRNA molecules or therapeutics against specific genes under study. Sequence-based Features and Critical BCUT Descriptors. As mentioned in the introduction, we hypothesized that both the sequence and the chemical structures of the composing nucleotides play a significant role in siRNA gene silencing potency. Thus, in addition to accurately predicting the potencies of siRNAs, we also proposed an analysis that can provide more insights into siRNA design. To be more specific, we performed relative importance analyses regarding the 21 nucleotide positions and the 12 different BCUT descriptors, respectively. We calculated the relative importance for the i-th descriptor as the normalized absolute value of the regression coefficients for the i-th descriptor in the PLS models. 
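A sketch of this relative-importance calculation, which is elaborated in the next paragraph, is given below; it assumes the 252 columns are ordered position-major (12 BCUT values for position 1, then position 2, and so on), which matches the concatenation scheme described earlier, and it uses synthetic stand-in data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def relative_importance(X, y, n_positions=21, n_bcut=12, n_components=10):
    """Normalized absolute PLS regression coefficients, averaged over positions
    (per-descriptor importance) and over descriptors (per-position importance).
    Assumes column order [pos1 x 12 BCUT, pos2 x 12 BCUT, ...]."""
    model = PLSRegression(n_components=n_components).fit(X, y)
    coef = np.abs(model.coef_).ravel()
    coef = coef / coef.sum()                  # normalize to relative importance
    grid = coef.reshape(n_positions, n_bcut)
    per_descriptor = grid.mean(axis=0)        # importance of each BCUT descriptor
    per_position = grid.mean(axis=1)          # importance of each nucleotide position
    return per_descriptor, per_position

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 252))
y = X @ rng.normal(size=252) * 0.1 + rng.normal(size=300)
d_imp, p_imp = relative_importance(X, y)
print(d_imp.shape, p_imp.shape)               # (12,), (21,)
```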
We calculated the relative importance for one of the 12 BCUT descriptors as an averaged value among the 30 rational splits and the 21 nucleotide positions. Figure 9a shows the relative importance of the 12 BCUT descriptors for siRNAs of natural nucleotides. The charge distribution may be less important compared with the hydrophobic distribution and polar property distribution of the natural nucleotides. Likewise, the relative importance for one of the nucleotide positions is averaged among the 30 rational splits and the 12 BCUT descriptors of the nucleotide. Here, nt1, nt2, ..., and nt21 represent the nucleotides in the first, second, ..., and 21st positions of a siRNA, respectively. The nt20 and nt21 are the overhangs. Figure 9b depicts the relative importance among the 21 composing nucleotides. It is consistent with previous publications [36,57] that the overhang has a noticeable contribution to the potency even though their contributions are not among the highest (see Figure 9a). Specifically, the descending order of relative importance among the 21 nucleotide positions is: nt1, nt2, nt7, nt11, nt19, nt3, nt14, nt6, nt4, nt12, nt17, nt18, nt15, nt9, nt21, nt20, and nt5. Modeling of the Bramsen Dataset Following the same approach detailed in Materials and Methods, we performed data preparation, model development, and model validation for the Bramsen dataset. To our best knowledge, this is one of the first modeling studies for gene silencing potency by siRNAs of chemically modified nucleotides. Since this dataset is relatively small (48 chemically modified siRNAs), we decided to use more stringent criteria to measure the quality of models. Instead of the Pearson correlation coefficient alone, we also used predictive r2 to judge the quality of the models. Specifically, we selected the models based on criteria of both Pearson r and predictive r2 equal to or greater than 0.60 for the test sets. Following the same protocol as that for the Huesken dataset, we replaced each nucleotide with its 12 BCUT descriptors, resulting in 252 descriptors for a siRNA molecule. By rational splitting, we generated 30 pairs of training and test sets. The number of siRNAs in each training set and test set was 39 and 9, respectively. Given that there are 252 descriptors and 39 training set molecules, the number of principal components of PLS modeling will have a strong effect on the quality of models. To find models of best predictive power, for each of the 30 training sets, we examined the number of principal components (1, 2, 3, 4, 5, 6, 7, 8, 9, and 10) to build the models, and the corresponding test set was used to validate the models. For the Bramsen dataset, if the number of principal components is smaller than 4, the resulting models were not predictive by our criteria, although the Pearson r and the predictive r2 were good for the training set. When the number of principal components employed was too large, there was overfitting. After careful examination, we selected three reliable predictive models, each of which uses 4 principal components. The Pearson r for the test set was 0.82, 0.82, and 0.81 for model 1, model 2, and model 3, respectively. The RMSE values for the test set were 0.135, 0.136, and 0.136, respectively. The predictive r2 for the test set was 0.63, 0.62, and 0.60 for model 1, model 2, and model 3, respectively. Figures 10-12 are the scatterplots for model 1, model 2, and model 3, respectively. 
Collectively, Figure 10 through Figure 12 show that the three models for chemically modified siRNA have both good training and testing quality. The original and predicted values for the three splits, as well as the source code for conducting rational splitting, are provided in the Supplementary Materials. On the basis of these results, we believe that our new approach could play a fundamentally important role in modeling gene silencing of chemically modified siRNA molecules, and it could facilitate the development of therapeutic siRNAs based on chemically modified molecules. Conclusions To summarize, we have developed a new approach for quantitatively predicting gene silencing potency by siRNAs. By characterizing a siRNA molecule using both sequence information and the chemical structures of the composing nucleotides, the new approach has overcome the drawbacks of existing methods that cannot model the potency of gene silencing triggered by chemically modified siRNAs. Our approach has laid a general foundation for quantitative modeling of siRNA gene silencing potency, both natural and chemically modified. This new numerical description of siRNA molecules coupled with PLS (partial least square) affords predictive models of a dataset of 48 chemically modified siRNAs judged by both the Pearson correlation coefficient r and predictive r2 obtained for both training and test sets. To demonstrate the general applicability of the new approach to siRNA modeling, we also built predictive models for a larger dataset with 2431 natural siRNA molecules. The performance of our models was comparable with or superior to that of one of the best models reported in the literature. The robustness of this modeling approach to siRNA-mediated gene silencing has also been established by a series of validation strategies that highlighted the effects of the number of principal components, rational splitting, and the training set size on the quality of models. To our best knowledge, a Web-based prediction tool [58] has been published that used nucleotide compositional patterns (but not chemical structures of nucleotides) as descriptors for chemically modified siRNA modeling. Our new method is the first attempt to introduce cheminformatics descriptors to the modeling of chemically modified siRNA potency. We emphasize that our approach is complementary to other methods in that cheminformatics descriptors can capture the nature of nucleotide chemical structures as opposed to other methods describing nucleotides as arbitrary alphabets: the BCUT cheminformatics descriptors could capture the similarities and differences between different nucleotides in terms of their chemical and physical properties. Because chemical modification provides solutions to many of the challenges facing siRNA design, our new approach can be used to facilitate chemically modified siRNA design for therapeutic purposes. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/molecules27196412/s1; the original siRNA sequences for the Huesken and Bramsen datasets, the corresponding BCUT values, and the data related to the partitions and predictions are provided in the respective Supplementary files.
8,267
2022-09-28T00:00:00.000
[ "Chemistry", "Biology", "Computer Science" ]
M6A-mediated upregulation of LINC00958 increases lipogenesis and acts as a nanotherapeutic target in hepatocellular carcinoma Background Long non-coding RNAs (lncRNAs) possess significant regulatory functions in multiple biological and pathological processes, especially in cancer. Dysregulated lncRNAs in hepatocellular carcinoma (HCC) and their therapeutic applications remain unclear. Methods Differentially expressed lncRNA profile in HCC was constructed using TCGA data. LINC00958 expression level was examined in HCC cell lines and tissues. Univariate and multivariate analyses were performed to demonstrate the prognostic value of LINC00958. Loss-of-function and gain-of-function experiments were used to assess the effects of LINC00958 on cell proliferation, motility, and lipogenesis. Patient-derived xenograft model was established for in vivo experiments. RNA immunoprecipitation, dual luciferase reporter, biotin-labeled miRNA pull-down, fluorescence in situ hybridization, and RNA sequencing assays were performed to elucidate the underlying molecular mechanisms. We developed a PLGA-based nanoplatform encapsulating LINC00958 siRNA and evaluated its superiority for systemic administration. Results We identified a lipogenesis-related lncRNA, LINC00958, whose expression was upregulated in HCC cell lines and tissues. High LINC00958 level independently predicted poor overall survival. Functional assays showed that LINC00958 aggravated HCC malignant phenotypes in vitro and in vivo. Mechanistically, LINC00958 sponged miR-3619-5p to upregulate hepatoma-derived growth factor (HDGF) expression, thereby facilitating HCC lipogenesis and progression. METTL3-mediated N6-methyladenosine modification led to LINC00958 upregulation through stabilizing its RNA transcript. A PLGA-based nanoplatform loaded with si-LINC00958 was developed for HCC systemic administration. This novel drug delivery system was controlled release, tumor targeting, safe, and presented satisfactory antitumor efficacy. Conclusions Our results delineate the clinical significance of LINC00958 in HCC and the regulatory mechanisms involved in HCC lipogenesis and progression, providing a novel prognostic indicator and promising nanotherapeutic target. Electronic supplementary material The online version of this article (10.1186/s13045-019-0839-x) contains supplementary material, which is available to authorized users. Introduction With 841,080 new cases and 781,631 deaths annually, hepatocellular carcinoma (HCC) ranks the sixth most commonly diagnosed malignancy and the fourth leading cause of death worldwide [1]. Despite great efforts dedicated in the therapeutic strategies for HCC over the past years, including surgical resection, liver transplantation, and comprehensive therapy, the 5-year survival rate of HCC patients remains dismal. Therefore, elucidating the molecular mechanisms underlying HCC and determining novel molecular targets are essential to develop effective treatment modalities for this deadly malignancy. Long non-coding RNAs (lncRNAs), a class of functional non-coding RNA transcripts > 200 nt in length, are engaged in diverse biological processes across every branch of life. Specific patterns of lncRNA expression coordinate cell differentiation, development, and pathogenesis. It has been widely recognized that many lncRNAs are dysregulated and play an important part in cancer progression [2]. 
In HCC, lncRNAs have been reported to affect various malignant phenotypes, such as cell proliferation, motility, and glucose metabolism reprogramming [3][4][5]. However, investigations of the involvement of lncRNAs in aberrant lipid metabolism in HCC are few. LncRNA-NEAT1 disrupts lipolytic enzyme ATGL-mediated lipolysis and drives HCC proliferation by binding miR-124-3p [6]. LncRNA HULC activates the acyl-CoA synthetase subunit ACSL1 in a miR-9-dependent manner to promote lipogenesis and function as an oncogene in hepatoma cells [7]. Long non-coding RNA 00958 (LINC00958) was originally identified as an oncogenic gene in bladder cancer by Seitz et al. [8]. Subsequent studies demonstrated that LINC00958 is upregulated in several other malignancies, including glioma [9], oral [10], gastric [11], pancreatic [12], and gynecological cancer [13,14]. The involvement of LINC00958 in HCC has not yet been documented, prompting us to explore its biological functions and clinical value. Polymeric nanoparticle (NP) platforms have emerged as promising carriers in cancer therapy by delivering a variety of drugs, including small interfering RNAs (siRNAs). NPs prevent siRNAs from rapid degradation, increase the drug concentration at tumor sites, and enable sustained release [15]. NPs formulated with poly(lactic-co-glycolic acid) (PLGA) copolymer are particularly attractive for clinical applications, due to their low immunogenicity, non-toxicity, biocompatibility, and biodegradability [16]. Poly(ethylene glycol) (PEG) is safe for clinical application and has been used in many Food and Drug Administration-approved medications including intravenous injections [17]. PEGylated PLGA NPs have been acknowledged as one of the best controlled release nanoplatforms for targeted drug delivery [18]. In the current study, we showed that LINC00958 was a lipogenesis-associated lncRNA that exacerbated HCC malignant phenotypes and independently predicted patient survival outcomes. Patient-derived xenograft (PDX) mouse models were adopted to evaluate the tumor-promoting role of LINC00958 in vivo. Mechanistically, METTL3-mediated N6-methyladenosine (m6A) induced the upregulation of LINC00958, which subsequently promoted HCC progression through the miR-3619-5p/HDGF axis. We developed a novel PLGA-based si-LINC00958 nanoplatform and evaluated its superiority for the treatment of HCC. Patients and tissue samples Fresh tumor tissues and paired adjacent non-tumor samples were collected from 80 HCC patients who underwent surgical resection from January 2012 to December 2014 in the First Affiliated Hospital of Nanjing Medical University. The tissue samples were preserved in liquid nitrogen. None of the patients received preoperative chemotherapy or radiotherapy, and all signed written informed consent. This study was approved by the ethical review board of the First Affiliated Hospital of Nanjing Medical University. RNA immunoprecipitation (RIP) RIP assay was performed using a Magna RIP RNA-binding Protein Immunoprecipitation Kit (Millipore, Bedford, MA, USA) in accordance with the manufacturer's protocol. Cells were isolated and lysed by RIP lysis buffer and incubated with antibodies against AGO2 (Abcam, Cambridge, MA, USA) or m6A (Synaptic Systems, Goettingen, Germany) at 4°C overnight. IgG was used as a negative control. The immunoprecipitated RNAs were eluted and analyzed by RT-qPCR. 
Biotin-labeled miRNA pull-down assay Cell lysates were harvested 48 h after transfection with 50 nM of biotin-labeled miRNAs (GeneCreate, Wuhan, China). Streptavidin-coupled Dynabeads (Invitrogen) were washed and resuspended in the buffer. Then an equal volume of the biotin-labeled miRNAs was added to the buffer. After incubating at room temperature for 10 min, the coated beads were separated with a magnet for 2 min and washed three times. The isolated RNAs were then subjected to RT-qPCR analysis. RNA sequencing Total RNA was isolated from sh-NC (n = 3) and sh-LINC00958 (n = 3) HCCLM3 cells. RNA samples were analyzed by RNA sequencing (BGI, Shenzhen, China) based on the manufacturer's protocols. Briefly, the BGISEQ-500 platform was used to sequence the samples for subsequent generation of raw data. Genes significantly differentially expressed between sh-NC and sh-LINC00958 cells were selected based on fold change ≥ 2.0 and P ≤ 0.001 using the DEGseq method. Functional pathway analysis was conducted using KEGG pathway enrichment analysis. PDX mouse model NOD/SCID and BALB/c mice were used for the establishment of the HCC PDX model. Briefly, we collected the primary HCC tissues from two patients after surgical resection and kept the specimens in iced culture medium supplemented with 1% penicillin/streptomycin. Then, the tissues were diced into 2-3 mm3 pieces and subcutaneously implanted into the flanks of NOD/SCID mice. When the xenografted tumors grew to 1-2 cm3, we harvested the tissues from the mice bearing PDX tumors and cut the tissues into pieces. The tumor fragments were further implanted into BALB/c nude mice for serial transplantation. When the tumor volume reached 50 mm3, we intratumorally injected recombinant lentivirus vectors into tumor tissues continuously for 20 days. Tumor weight and volume were recorded. Preparation of PLGA-PEG(si-LINC00958) NPs We used the double emulsion solvent diffusion method for NP preparation as previously described [20]. si-LINC00958 was reconstituted in DEPC water and then mixed with spermidine (Sigma-Aldrich) at the N/P ratio (the ratio of polyamine amine groups to siRNA phosphate groups) of 8:1. The resultant mixture was incubated for 15 min at room temperature to form the si-LINC00958/spermidine complex. PLGA-PEG-COOH (10 mg; DaiGang Biomaterial Co. Ltd., Jinan, China) was dissolved in 500 μl of dichloromethane (Aladdin Industrial Corp., Shanghai, China). Then, the above dichloromethane solution was added dropwise to the si-LINC00958/spermidine complex with a probe sonicator (VCX 130; Sonics & Materials, Inc., Newtown, CT, USA) in an ice bath. The resultant primary emulsion was further added dropwise to 4 ml of an aqueous phase containing 2.5% polyvinyl alcohol (Aladdin Industrial Corp.) and emulsified using probe sonication for 1 min. The second emulsion was then stirred at room temperature for 4 h to evaporate the organic solvent. Subsequently, the NPs were collected by centrifugation for 15 min and washed twice with DEPC water. We used dynamic light scattering (DLS) with a Nano Particle Analyzer (Zetasizer Nano ZSE, Malvern Instruments Ltd., UK) to investigate the size, zeta potential, and polydispersity index (PDI) of the NPs. A drop of the sample was placed onto a copper mesh and dried at room temperature to obtain transmission electron microscopy (TEM) images of the NPs. The siRNA encapsulated in PLGA was measured using UV spectrophotometry to determine the encapsulation efficiency as previously described [21]. 
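For the RNA sequencing step, the fold-change and P-value filter described above can be sketched as follows; the table and column names are placeholders (the real DEGseq output format differs), and the ≥ 2.0 threshold is interpreted here as a two-fold change in either direction.

```python
import pandas as pd

# Illustrative differential-expression table standing in for the DEGseq output
# of sh-NC vs sh-LINC00958 HCCLM3 cells (gene names and values are made up,
# except that a ~2.6-fold downregulated gene mimics the HDGF result reported later).
de = pd.DataFrame({
    "gene":        ["HDGF", "GENE2", "GENE3", "GENE4"],
    "fold_change": [0.38,   2.4,     1.3,     0.9],     # sh-LINC00958 / sh-NC
    "p_value":     [1e-5,   5e-4,    0.02,    0.3],
})

significant = de[(de["p_value"] <= 0.001) &
                 ((de["fold_change"] >= 2.0) | (de["fold_change"] <= 0.5))]
down = significant[significant["fold_change"] < 1.0]     # downregulated after knockdown
print(significant["gene"].tolist(), down["gene"].tolist())
```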
In vivo antitumor efficacy and toxicity evaluation of NPs To investigate the suppressive effect of PLGA-PEG(si-LINC00958) NPs on HCC cell growth in vivo, PDX tumor models were created as described above. When the tumors developed to 50 mm3, PLGA-PEG(si-LINC00958) NPs or PLGA-PEG(siRNA control) NPs at a dose of 200 mg/kg were injected into the mice (n = 14 in each group) through the tail vein twice weekly. Treatment continued for 4 weeks, at which point four mice in each group were sacrificed. Tumor weight and volume were recorded immediately. Tumors were subjected to subsequent RT-qPCR and western blotting analyses. The major organs, including the liver, kidney, lung, spleen, and heart, were harvested and fixed with 4% paraformaldehyde for further hematoxylin-eosin (H&E) examination. Blood alanine transaminase (ALT), aspartate transaminase (AST), creatinine (Cr), and blood urea nitrogen (BUN) were also analyzed. The remaining ten mice in each group were monitored for survival analysis with 10 weeks as the cutoff. Statistical analysis SPSS 24.0 (IBM Corporation, Armonk, NY, USA) and GraphPad Prism 8.0 (GraphPad Software, La Jolla, CA, USA) were used to perform the statistical analysis. Data are shown as mean ± SEM. Two-sided Student's t test was used to analyze the differences between groups. The differences in LINC00958 and HDGF expression levels between tumor and non-tumor specimens were evaluated by paired t test. Chi-square test was adopted to analyze the association of LINC00958 and METTL3 expression with clinicopathological features. Kaplan-Meier curves with the log-rank test were used to compare the survival outcome, and the Cox proportional hazards model was employed for multivariate survival analysis. Pearson's correlation was performed to analyze the correlation between LINC00958, miR-3619-5p, METTL3, and HDGF levels. A P value less than 0.05 was considered statistically significant. Supplementary methods are described in Additional file 1. LINC00958 is highly expressed in HCC and predicts overall survival With a stringent filter of logFC > 2.0 and P value < 0.005, we established the profile of differentially expressed lncRNAs in HCC based on TCGA data. As demonstrated in Additional file 2: Figure S1A, the expression levels of 441 lncRNAs were significantly altered in HCC tumor samples. LINC00958 was suggested to be upregulated in HCC (logFC = 4.092782 and P value = 4.44 × 10−7). We then used the data retrieved from the starBase platform and found a markedly higher expression of LINC00958 in HCC (Additional file 2: Figure S1B). To verify the bioinformatics results, we performed RT-qPCR to quantify the expression levels of LINC00958 in the normal human liver cell line QSG-7701 and six HCC cell lines. As shown in Fig. 1a, LINC00958 was highly expressed in HCC cell lines. Furthermore, we examined LINC00958 expression in 80 paired HCC tissues and non-tumor specimens. LINC00958 was remarkably overexpressed in HCC tissues (Fig. 1b), especially in those with moderate/poor differentiation, microvascular invasion, and TNM III/IV stage (Fig. 1c-e). FISH assay verified the overexpression of LINC00958 in HCC tissues compared to the non-tumor samples (Fig. 1f). To investigate the clinical significance of LINC00958 in HCC, we classified the enrolled 80 patients into LINC00958 high and LINC00958 low groups based on the median expression value (2−ΔΔCt = 0.134). 
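A minimal sketch of the survival comparisons described in the statistical analysis section (Kaplan-Meier estimation with the log-rank test and multivariate Cox regression) is given below, using the lifelines package; the patient table and column names are placeholders, not the study data.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

# Placeholder patient table (columns are assumptions, not the study data):
# time = months of follow-up, event = 1 if death was observed,
# linc00958_high = 1 if expression is above the cohort median.
df = pd.DataFrame({
    "time":           [12, 30, 45, 9, 60, 24, 50, 15, 36, 48],
    "event":          [1,  1,  0,  1, 1,  1,  0,  1,  0,  1],
    "linc00958_high": [1,  0,  0,  1, 0,  1,  1,  1,  0,  0],
    "tnm_stage_34":   [1,  1,  0,  0, 0,  1,  0,  1,  0,  1],
})

high, low = df[df.linc00958_high == 1], df[df.linc00958_high == 0]
print(logrank_test(high.time, low.time, high.event, low.event).p_value)

km = KaplanMeierFitter().fit(high.time, high.event, label="LINC00958 high")
print(km.median_survival_time_)

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["exp(coef)", "p"]])    # hazard ratios with p-values
```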
As indicated in Additional file 3: Table S1, high LINC00958 expression was associated with tumor differentiation (P = 0.019), tumor size (P = 0.025), microvascular invasion (P = 0.014), and TNM stage (P = 0.013). Bioinformatics prediction implied a correlation between LINC00958 expression status and patient survival (P = 0.014; Additional file 2: Figure S1C), and we confirmed that patients with high LINC00958 expression had poorer overall survival than those with low LINC00958 expression (P = 0.003; Fig. 1g). Multivariate Cox regression analysis showed that LINC00958 was an independent prognostic factor for HCC patients [hazard ratio (HR) 2.153, 95% confidence interval (CI) 1.105-4.195, P = 0.024; Fig. 1h and Additional file 4: Table S2]. LINC00958 is required for malignant behaviors in HCC cells We performed RT-qPCR and FISH assays and showed that LINC00958 was predominantly located in the cytoplasm (Fig. 2a, b). To evaluate the function of LINC00958 in HCC, we stably knocked down the expression of LINC00958 by three shRNAs (sh1-LINC00958, sh2-LINC00958, and sh3-LINC00958) in HCCLM3 and Focus cells. As shown in Fig. 2c, sh2-LINC00958 exhibited the most evident knockdown effect and was chosen for the subsequent experiments. CCK-8 assays demonstrated that silencing LINC00958 significantly reduced the proliferative capabilities of HCCLM3 and Focus cells (Fig. 2d). The inhibitory effects of LINC00958 knockdown on HCC cell proliferation were further confirmed by colony formation and EdU assays (Fig. 2e, f). Transwell assays showed that HCC cells transfected with sh-LINC00958 presented markedly decreased migration and invasion abilities (Fig. 2g). In addition, we overexpressed LINC00958 via lentivirus in Hep3B and HepG2 cells (Additional file 5: Figure S2A). Based on the results from CCK-8 assays, we observed increased cell growth rates in HCC cells with LINC00958 overexpression (Additional file 5: Figure S2B). We also performed colony formation and EdU assays and showed that LINC00958 overexpression significantly elevated the proliferation of Hep3B and HepG2 cells (Additional file 5: Figure S2C-D). LINC00958 overexpression greatly promoted cell migration and invasion abilities in HCC (Additional file 5: Figure S2E). Collectively, these data indicated that LINC00958 facilitates HCC proliferation and migration in vitro. Highly expressed LINC00958 is associated with clinicopathological characteristics and independently predicts overall survival. a Expression levels of LINC00958 in the normal human liver cell line QSG-7701 and six HCC cell lines (Hep3B, HepG2, Huh7, MHCC-97H, Focus, and HCCLM3) were examined using RT-qPCR. b The expression levels of LINC00958 in 80 paired HCC tissues and non-tumor specimens were determined using RT-qPCR. c LINC00958 expression levels were detected in patients with well-differentiated tumors and those with moderate/poor differentiation (n = 80). d Expression levels of LINC00958 were compared between patients without microvascular invasion (MVI) and those with MVI (n = 80). e LINC00958 expression levels were assessed in the TNM stage I/II group and the TNM III/IV group (n = 80). f FISH was used to determine the expression of LINC00958 in HCC tissue and the paired non-tumor sample. Statistical data are shown. g Kaplan-Meier survival curves showing the effect of LINC00958 on overall survival (P = 0.003) (n = 80). h Multivariate analyses of the independent predictive factors for overall survival. Hazard ratio (HR) and the corresponding 95% confidence interval (CI) are shown. 
*P < 0.05, **P < 0.01, ***P < 0.001
Fig. 2 Knockdown of LINC00958 inhibits HCC proliferation, migration, and invasion in vitro. a Levels of LINC00958 from the nuclear and cytoplasmic fractions of HCCLM3 and Focus cells were evaluated using RT-qPCR. GAPDH and U6 were used as positive controls for the cytoplasmic and nuclear fractions, respectively. b FISH was performed to determine the subcellular distribution of LINC00958 in HCCLM3 cells. c The expression of LINC00958 was knocked down using three shRNAs in HCCLM3 and Focus cells. d CCK-8 assays were performed to assess the cell proliferation in LINC00958-silenced HCC cells. e Colony formation assays showed the clone numbers in HCC cells with LINC00958 knockdown. f EdU assays were performed to assess the proliferative ability of HCCLM3 and Focus cells with LINC00958 knockdown. g Transwell assays were conducted to examine the effects of LINC00958 knockdown on HCC cell migration and invasion. *P < 0.05, **P < 0.01, ***P < 0.001
LINC00958 targets miR-3619-5p to exert its tumor-promoting effects in HCC Given the cytoplasmic distribution of LINC00958, we hypothesized that LINC00958 might exert its effects via targeting miRNAs. As demonstrated in Fig. 3a, the results from RIP assays using an anti-AGO2 antibody showed that endogenous LINC00958 was preferentially enriched in the AGO2 IP pellet compared to the control IgG IP pellet. To explore the underlying regulatory mechanism of LINC00958, we used the starBase and miRDB databases and found six miRNAs with potential complementary binding sequences (Fig. 3b and Additional file 6: Supplementary Material 1).
Fig. 3 LINC00958 targets miR-3619-5p to exert its tumor-promoting effects in HCC. a RIP assay for AGO2 was conducted to detect the levels of endogenous LINC00958 in the AGO2 IP pellet. b A total of six miRNAs were predicted to harbor complementary sequences to LINC00958 according to the starBase and miRDB databases. c AGO2-RIP assays showed the enrichment of the predicted six miRNAs in Hep3B and HepG2 cells with LINC00958 overexpression or NC. d Enrichment of LINC00958 in HCCLM3 and Focus cells transfected with miR-3619-5p mimics or miR-control. e Putative binding sequence between LINC00958 and miR-3619-5p. f Luciferase reporters containing the WT or MUT LINC00958 transcript as well as blank pmirGLO were co-transfected with miR-3619-5p mimics or miR-control in HCCLM3 and Focus cells. Luciferase activity was determined using a dual luciferase reporter system. g Enrichment of LINC00958 pulled down by biotin-miR-3619-5p or negative control. h FISH results showing the colocalization of LINC00958 and miR-3619-5p in the cytoplasm in HCC cells. **P < 0.01, ***P < 0.001
AGO2-RIP assays showed that miR-3619-5p was the most highly enriched miRNA in the LINC00958-overexpressed group compared to the negative control (NC) group (Fig. 3c), and LINC00958 enrichment was much higher in the miR-3619-5p mimics group compared to the miR-NC group (Fig. 3d). These data suggested that LINC00958 and miR-3619-5p existed in the same RNA-induced silencing complex. Furthermore, we generated a mutant sequence of LINC00958 that could not bind miR-3619-5p for the subsequent luciferase reporter assays (Fig. 3e). As demonstrated in Fig. 3f, miR-3619-5p mimics significantly decreased the luciferase activity in HCC cells transfected with the wild-type LINC00958 sequence, whereas the luciferase activity was not obviously altered in HCC cells transfected with the mutant LINC00958.
Subsequently, biotin-labeled miRNA pulldown assays showed significantly increased LINC00958 interaction in the HCC cells transfected with biotin-labeled miR-3619-5p compared to that in the control (Fig. 3g). FISH assays revealed the colocalization of LINC00958 and miR-3619-5p in the cytoplasm in HCC cells (Fig. 3h). HDGF, a direct target of miR-3619-5p, is crucial for the function of LINC00958 To investigate the targets of miR-3619-5p regulated by LINC00958, we performed bioinformatics analysis using four different algorithms including PicTar, TargetScan, miRDB, and RNA22 (Additional file 8: Supplementary Material 2). Figure 4a shows the overlapping target genes of miR-3619-5p. Expression analysis yielded that only hepatoma-derived growth factor (HDGF) was downregulated in HCCLM3 cells upon LINC00958 knockdown and upregulated in Hep3B cells upon LINC00958 overexpression (Fig. 4b). Furthermore, RNA sequencing was performed to identify the differentially expressed genes in LINC00958-silenced HCC cells. Using DEGseq method, we found that 429 genes were downregulated and 134 genes were upregulated after LINC00958 knockdown (Fig. 4c). Enriched KEGG pathway analysis on RNA sequencing data was shown in Fig. 4d. Notably, HDGF was found downregulated by 2.63-fold after LINC00958 knockdown. We then mutated the miR-3619-5p binding site of the 3′-UTR of HDGF to construct the pmirGLO-HDGF 3′-UTR-MUT vector (Fig. 4e). Luciferase reporter assays indicated that miR-3619-5p mimics significantly decreased the luciferase activity of pmirGLO-HDGF 3′-UTR-WT, but had no effect on the activity of pmirGLO-HDGF 3′-UTR-MUT (Fig. 4f). We then investigated the expression levels of HDGF in HCC cells transfected with miR-3619-5p mimics or inhibitor. In HCCLM3 cells, miR-3619-5p mimics led to a decreased expression level of HDGF. Compared with the control cells, Hep3B cells transfected with miR-3619-5p inhibitor presented a higher level of HDGF expression (Fig. 4g). As demonstrated in both TCGA data (Fig. 4h) and our RT-qPCR results (Fig. 4i), HDGF was remarkably upregulated in HCC tissues. Correlation analysis suggested a negative correlation between the level of miR-3619-5p and HDGF expression level in HCC tissue specimens (Fig. 4j). In addition, the expression level of LINC00958 was positively correlated with HDGF expression level (Fig. 4k). To confirm that HDGF was the downstream target of LINC00958-mediated HCC progression, we performed the subsequent functional rescue assays. We first overexpressed HDGF in HCCLM3 sh-LINC00958 cells and verified the overexpression efficiency by RT-qPCR and western blotting (Additional file 9: Figure S4A-B). Based on the CCK-8 and EdU assays, we found that HDGF overexpression could rescue the suppressed proliferative capability in HCCLM3 sh-LINC00958 cells (Additional file 9: Figure S4C-D). Transwell assays showed that the inhibited cell motility was restored after overexpressing HDGF in HCCLM3 sh-LINC00958 cells (Additional file 9: Figure S4E). Together, these data indicated the tumor-promoting role of the LINC00958/miR-3619-5p/HDGF axis in HCC. LINC00958 positively correlates with lipogenesis in HCC cells Mounting evidence indicates that abnormal lipid metabolism plays crucial parts in HCC development [22][23][24], and HDGF was recently reported to be a lipogenesis-associated gene in tumorigenesis [25]. We started to explore whether LINC00958 could affect lipogenesis in HCC cells. 
Pathway enrichment results also revealed that fatty acid metabolism was among the top canonical pathways (Fig. 4d). As shown in Fig. 5a, b, HCCLM3 and Focus cells with LINC00958 knockdown exhibited reduced cellular levels of cholesterol and triglyceride, whereas Hep3B and HepG2 cells overexpressing LINC00958 presented higher cholesterol and triglyceride levels compared with the control cells. HCCLM3 and Focus cells with LINC00958 knockdown showed decreased mRNA levels of several key enzymes in lipogenesis, including SREBP1, FASN, SCD1, and ACC1 (Fig. 5c). In contrast, Hep3B and HepG2 cells with LINC00958 overexpression showed increased SREBP1, FASN, SCD1, and ACC1 mRNA (Fig. 5d). Western blotting results confirmed that LINC00958 was positively correlated with the protein levels of SREBP1, FASN, SCD1, and ACC1 in HCC cells (Fig. 5e, f). Furthermore, more lipid droplets were detected by Oil Red O staining in HCC patient samples with a high LINC00958 expression level compared to those with a low LINC00958 expression level (Fig. 5g, h). To verify that LINC00958 promoted lipogenesis through miR-3619-5p, we performed the subsequent rescue experiments. As demonstrated in Additional file 10: Figure S5A-B, the elevated cholesterol and triglyceride levels in LINC00958-overexpressed Hep3B cells were ameliorated by miR-3619-5p mimics. High SREBP1 mRNA and protein levels in LINC00958-overexpressed Hep3B cells were counteracted by transfecting miR-3619-5p mimics (Additional file 10: Figure S5C-D). Oil Red O staining results suggested that miR-3619-5p overexpression restored the elevated lipid droplets in LINC00958-overexpressed Hep3B cells (Additional file 10: Figure S5E). Likewise, we investigated the effects of HDGF overexpression on lipogenesis in LINC00958-silenced HCCLM3 cells. As shown in Additional file 11: Figure S6A-E, the inhibited cellular cholesterol and triglyceride levels, SREBP1 levels, and lipid droplet levels in LINC00958-silenced HCCLM3 cells were rescued by HDGF overexpression, suggesting that HDGF was the downstream effector of LINC00958-mediated lipogenesis. Together, these data supported that LINC00958 promoted lipogenesis in HCC through the miR-3619-5p/HDGF signaling pathway. LINC00958 facilitates HCC growth in vivo PDX mouse models were utilized to investigate the effects of LINC00958 on HCC growth in vivo. We intratumorally injected recombinant sh-NC, sh-LINC00958, LINC00958, and NC into four groups of PDX mice, respectively (Fig. 6a). Clinical characterization of the donor patients is presented in Fig. 6b. We histopathologically analyzed the engrafted tumors using H&E staining (Fig. 6c). As indicated in Fig. 6d-f, we found that LINC00958 knockdown resulted in blunted tumor growth in terms of tumor weight and volume, whereas LINC00958 overexpression accelerated tumor growth. We then determined the expression levels of LINC00958 in the four groups of PDX tumors by FISH and RT-qPCR. As shown in Fig. 6g, h, decreased LINC00958 levels were detected in the sh-LINC00958 group, whereas elevated LINC00958 levels were observed in the LINC00958 group. The expression level of HDGF was found downregulated in the sh-LINC00958 group and upregulated in the LINC00958 group (Fig. 6i). The expression levels of HDGF, SREBP1, and Ki67 in the PDX tumors were then examined by immunohistochemistry. As presented, downregulated HDGF, SREBP1, and Ki67 expression levels were detected in the sh-LINC00958 group, while upregulated levels of HDGF, SREBP1, and Ki67 were found in the LINC00958 group (Fig. 6j).
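The group comparisons of xenograft weight and volume summarized in Fig. 6d-f are standard endpoint analyses. As a purely illustrative sketch (not taken from this study), the snippet below shows how caliper measurements are often converted to volumes with the common ellipsoid approximation V = length x width^2 / 2 and compared between two groups with an unpaired t-test; all numbers, group names, and the choice of test are assumptions for illustration only.

    import numpy as np
    from scipy import stats

    def tumor_volume(length_mm, width_mm):
        # Common ellipsoid approximation for caliper measurements, in mm^3.
        return length_mm * width_mm ** 2 / 2.0

    # Hypothetical caliper readings (length, width) in mm for two groups.
    sh_nc = np.array([[12.1, 9.8], [13.4, 10.2], [11.8, 9.1], [12.9, 10.0]])
    sh_linc = np.array([[8.2, 6.9], [7.5, 6.1], [9.0, 7.2], [8.4, 6.5]])

    vol_nc = tumor_volume(sh_nc[:, 0], sh_nc[:, 1])
    vol_kd = tumor_volume(sh_linc[:, 0], sh_linc[:, 1])

    t_stat, p_value = stats.ttest_ind(vol_nc, vol_kd)
    print(f"mean volume sh-NC: {vol_nc.mean():.0f} mm^3, "
          f"sh-LINC00958: {vol_kd.mean():.0f} mm^3, P = {p_value:.4f}")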
m6A modification is associated with LINC00958 upregulation in HCC cells Recent advancements in tumor epigenetic regulation have shed light on the involvement of m6A modification in lncRNAs [26,27]. We then wondered whether m6A was associated with LINC00958 upregulation in HCC. According to the results from the online bioinformatics database m6Avar [28], we found four RRACU m6A sequence motifs in the exon region (at chr11: 13001568, 13002361, 13002410, and 13011005). m6A RIP-qPCR analysis showed that m6A was highly enriched within LINC00958 in Hep3B, HepG2, HCCLM3, and Focus cells (Fig. 7a). METTL3 is a crucial m6A methyltransferase and has been reported to be involved in HCC development [29]. RT-qPCR results indicated that the METTL3 expression level was positively correlated with the level of LINC00958 in 50 HCC tissues (Additional file 12: Figure S7A). As indicated in Additional file 13: Table S3 and Additional file 12: Figure S7B, high METTL3 expression was associated with tumor differentiation (P = 0.002), … (Figure S7C). To explore the effects of METTL3 on LINC00958 upregulation in HCC, we knocked down the expression of METTL3 using lentivirus in HCCLM3 and Focus cells. As shown in Fig. 7b, c, RT-qPCR and western blotting assays verified the knockdown efficiency. Compared to the control group, the m6A level of LINC00958 was lower in METTL3-silenced HCC cells (Fig. 7d). We found that METTL3 downregulation was associated with a decreased LINC00958 expression level (Fig. 7e). We then treated HCC cells with actinomycin D to block transcription and found that METTL3 knockdown significantly decreased the half-life of LINC00958 in HCCLM3 and Focus cells (Fig. 7f, g). These data suggested that METTL3-mediated m6A is associated with the upregulation of LINC00958 in HCC, probably by regulating the stability of its transcript. In addition, we investigated whether DNA methylation or histone modification could affect LINC00958 in HCC, whereas no significant results were observed (Fig. 7h, i). A graphic illustration of the tumor-promoting role of LINC00958 in HCC is depicted in Fig. 7j.
Fig. 4 HDGF is a direct target gene of miR-3619-5p. a Venn diagram showing six putative miR-3619-5p target genes predicted by four different algorithms (PicTar, TargetScan, miRDB, and RNA22). b RT-qPCR analysis showed that HDGF was downregulated in LINC00958-silenced HCCLM3 cells and upregulated in LINC00958-overexpressed Hep3B cells. c Heat map showing the differentially expressed genes modulated by LINC00958 knockdown. d Enriched KEGG pathway analysis showing the most enriched pathways. e Putative binding sequence of miR-3619-5p in the 3′-UTR of HDGF. f Dual luciferase reporter assay revealed that miR-3619-5p could bind to the 3′-UTR of HDGF. g Expression levels of HDGF were detected by RT-qPCR in miR-3619-5p-overexpressed HCCLM3 cells or miR-3619-5p-silenced Hep3B cells. h TCGA data suggested that the expression level of HDGF was upregulated in liver cancer samples. i RT-qPCR results showing the expression levels of HDGF in 50 paired HCC tissues and non-tumor specimens. j Correlation analysis showing a negative correlation between miR-3619-5p and HDGF expression (P = 0.011). k Expression level of LINC00958 was positively correlated with HDGF expression level (P = 0.002)
Fig. 7 m6A modification is associated with LINC00958 upregulation in HCC cells. a m6A RIP-qPCR analysis showed that m6A was highly enriched within LINC00958 in HCC cells. b RT-qPCR was performed to verify the knockdown efficiency of METTL3 in HCCLM3 and Focus cells. c Western blotting was used to confirm the knockdown efficiency of METTL3 in HCCLM3 and Focus cells. d The m6A level of LINC00958 was examined in HCCLM3 and Focus cells with METTL3 knockdown. e The expression level of LINC00958 was assessed in HCCLM3 and Focus cells with METTL3 knockdown. f HCCLM3 cells with METTL3 knockdown were treated with actinomycin D for the indicated time points, and the expression level of LINC00958 was examined using RT-qPCR. g Focus cells with METTL3 knockdown were treated with actinomycin D for the indicated time points, and the expression level of LINC00958 was examined using RT-qPCR. h HCCLM3 and Focus cells were treated with or without 5-aza-dC, and the LINC00958 expression level was examined using RT-qPCR. i HCCLM3 and Focus cells were treated with specific inhibitors of HDAC1 (PCI-24781), HDAC3 (RGFP966), HDAC6 (ACY-1215), and a broad-spectrum HDAC inhibitor (SAHA). RT-qPCR was performed to examine the expression levels of LINC00958. j Schematic diagram demonstrating the molecular mechanisms underlying LINC00958 in HCC
Characteristics of PLGA-PEG(si-LINC00958) NPs To investigate the potential utility of LINC00958 as a therapeutic target for HCC, we developed a novel PLGA-based nanoplatform encapsulating si-LINC00958. The double emulsion solvent diffusion method was used to prepare the PEGylated PLGA NPs loaded with si-LINC00958 and spermidine, referred to hereafter as PLGA-PEG(si-LINC00958) NPs. PEGylation of PLGA improves the stability of NPs in the physiological environment by decreasing their interactions with serum proteins [30]. Spermidine can neutralize the charge of the anionic siRNA, rendering it less hydrophilic and more likely to be encapsulated into the hydrophobic PLGA [16]. As shown in the representative TEM image (Fig. 8a), PLGA-PEG(si-LINC00958) NPs were spherical in shape and presented narrow size distributions. DLS was used to measure their size and zeta potential. The average diameter of PLGA-PEG(si-LINC00958) NPs was 170.49 ± 4.45 nm, with a PDI of 0.15 ± 0.01 (Fig. 8b and Additional file 14: Figure S8A). The zeta potential was − 4.85 ± 0.02 mV (Additional file 14: Figure S8B). A negative surface charge has been previously reported to be optimal to achieve a long-lasting in vivo circulation time [31]. The encapsulation efficiency of the PLGA-PEG(si-LINC00958) NPs was 40.8 ± 1.1%. We then evaluated the in vitro release behavior of si-LINC00958 from PLGA-PEG(si-LINC00958) NPs (Additional file 14: Figure S8C). In the first 24 h, the cumulative release of siRNA from free si-LINC00958 and PLGA-PEG(si-LINC00958) NPs was 80.3% and 42.7%, respectively. Over time, the si-LINC00958 entrapped in NPs was gradually released, and the sustained release continued for approximately 1 week. These data suggested that PLGA-PEG(si-LINC00958) NPs provided controlled release. The cellular uptake of NPs into HCCLM3 cells was examined by incubating the cells with Coumarin-6-labeled NPs. Fluorescence microscopy revealed that Coumarin-6 NPs exhibited a strong fluorescent signal inside the cells compared to the control (Fig. 8c), indicating that the NPs increased cellular drug uptake. Then, we evaluated the targeting properties of NPs in vivo. Free 1,1′-dioctadecyl-3,3,3′,3′-tetramethylindotricarbocyanine iodide (DiR) or DiR-NP was intravenously injected via the tail vein.
The fluorescent intensity of the DiR-NP group in the tumor was significantly stronger than that of the free DiR group (Additional file 15: Figure S9A-B). The data demonstrated the in vivo tumor-targeting capacity of the NPs. As shown in Additional file 14: Figure S8D, we verified the knockdown efficiency of PLGA-PEG(si-LINC00958) NPs in HCCLM3 cells. We then performed CCK-8 assays to assess the in vitro antitumor capability of PLGA-PEG(si-LINC00958) NPs. The results showed that PLGA-PEG(si-LINC00958) NPs could decrease the proliferative ability of HCC cells (Additional file 14: Figure S8E). Therapeutic efficacy and toxicity evaluation of systemic injection of PLGA-PEG(si-LINC00958) NPs in a PDX model of HCC We treated the HCC PDX model by injecting PLGA-PEG(siRNA control) NPs or PLGA-PEG(si-LINC00958) NPs via the tail vein. Tumor growth was significantly inhibited following treatment with PLGA-PEG(si-LINC00958) NPs compared with PLGA-PEG(siRNA control) NPs (Fig. 8d). The xenograft tumor weight and volume were markedly reduced in mice injected with PLGA-PEG(si-LINC00958) NPs (Fig. 8e, f). RT-qPCR results confirmed the consistent knockdown of LINC00958 in xenografts derived from mice treated with PLGA-PEG(si-LINC00958) NPs (Fig. 8g). Downregulated expression levels of HDGF and SREBP1 in xenograft tumors were verified by western blotting (Fig. 8h). Survival analysis showed that systemic administration of PLGA-PEG(si-LINC00958) NPs remarkably prolonged mouse overall survival (Fig. 8i). We evaluated systemic toxicity by H&E staining and showed that intravenous administration of PLGA-PEG(si-LINC00958) NPs exhibited no significant toxicity to major organs including the liver, kidney, lung, spleen, and heart (Fig. 8j). In addition, we performed blood index analyses of ALT, AST, Cr, and BUN and confirmed the absence of significant hepatotoxicity and nephrotoxicity (Fig. 8k-n). Discussion LncRNAs have been established as crucial regulators in pathogenesis, especially in malignancies [32]. In the present study, we used TCGA data to determine a landscape of differentially expressed lncRNAs in HCC, which revealed a significant upregulation of LINC00958 in HCC. We then confirmed that LINC00958 was highly expressed in HCC by RT-qPCR and FISH assays. A high LINC00958 level was correlated with multiple malignant clinicopathological characteristics and was an independent predictor of unfavorable survival outcome. By loss-of-function and gain-of-function experiments, we demonstrated that LINC00958 promoted the proliferation, migration, and invasion of HCC in vitro. PDX models have emerged as invaluable preclinical models for cancer research [33]. We adopted PDX models and verified the tumor-promoting role of LINC00958 in vivo. Sequestration of miRNAs is the most frequently reported mechanism by which lncRNAs exert their regulatory function. Given the cytoplasmic distribution of LINC00958 in HCC, we wondered whether LINC00958 could serve as a miRNA sponge. We screened six miRNAs shared by two different bioinformatics databases and verified the binding between LINC00958 and miR-3619-5p using RIP, dual luciferase reporter, RNA pull-down, and FISH assays. Further functional experiments showed that LINC00958 sponged miR-3619-5p to promote HCC progression. Previous studies indicated that miR-3619-5p inhibits cell proliferation and migration in HCC [34]. miR-3619-5p is involved in LINC00202-mediated retinoblastoma progression through targeting the expression of the oncogene RIN1 [35].
miR-3619-5p has also been demonstrated to exert a tumor-suppressive role in several types of malignancies, including bladder cancer [36], lung cancer [37], prostate cancer [38], and cutaneous squamous cell carcinoma [39]. To investigate the target gene of the LINC00958/miR-3619-5p pathway in HCC, we combined four bioinformatics algorithms and RNA sequencing results and found that HDGF was the downstream effector of the LINC00958/miR-3619-5p axis. HDGF has been established as an oncogene that facilitates the progression of HCC [40,41]. One recent study suggested that HDGF could affect lipid metabolism via SREBP1 in HCC [25]. Our results revealed that LINC00958 facilitated lipogenesis via the miR-3619-5p/HDGF pathway. LINC00958 increased cellular cholesterol and triglyceride levels and contributed to lipid droplet formation. Key enzymes in lipogenesis, including SREBP1, FASN, SCD1, and ACC1, were also affected by LINC00958. As one of the hallmarks of cancer, metabolic alteration plays an indispensable role in cancer. However, only a few studies have focused on the involvement of lncRNAs in HCC lipid metabolic reprogramming. Our data have provided novel insights into the lipogenesis-modulating role of LINC00958 in HCC. Recent years have witnessed remarkable advancements of m6A modification in regulating all stages of the RNA life cycle. The deposition of m6A is encoded by "writers" that catalyze m6A formation (such as METTL3, METTL14, and WTAP), "erasers" that selectively remove the m6A code (such as FTO and ALKBH5), and "readers" that decode m6A methylation (such as YTH domain proteins and IGF2BP) [42]. m6A has been demonstrated to affect the targeted mRNA or miRNA and participate in the progression of various cancers [43]. However, studies on m6A modification in lncRNAs are scarce in the field of cancer. Recently, Wu et al. demonstrated that m6A modification upregulates lncRNA RP11 by increasing its nuclear accumulation [27]. m6A was suggested to be highly enriched on lncRNA FAM225A and can increase its RNA stability [26]. Herein, we revealed that m6A methylation was enriched within LINC00958 in HCC cells using both in silico data and an m6A RIP experiment. Moreover, METTL3 regulated the m6A modification in LINC00958, thus affecting its RNA stability. These results suggested that the elevation of LINC00958 in HCC may be attributed to m6A modification. Targeted delivery of siRNA using NPs has been recognized as practical and promising for cancer nanotherapy. Liposomes and viral vectors have been proposed as potential vehicles for siRNA delivery, but they may induce toxicity and cannot maintain sustained release of siRNAs [16]. Approved by the Food and Drug Administration, PLGA is biodegradable and non-toxic and provides high stability, prolonged blood circulation time, and a sustained release profile [44]. PLGA has gained substantial attention among the various polymers developed for the formulation of nanoplatforms and has been used for siRNA delivery. Byeon et al. used PLGA-based NPs incorporating FAK siRNA for overcoming chemoresistance in ovarian cancer [45]. PLGA NPs loaded with siRNA against osteopontin have been demonstrated to be effective for mammary carcinoma systemic treatment [46]. PEGylated NPs are regarded as "stealth NPs" and are characterized by increased circulation time in vivo and tumor uptake. The surface shielding with PEG avoids plasma protein adsorption, protects NPs from immune recognition, and increases bioavailability [18]. In this study, we developed and characterized a PEGylated PLGA nanoplatform loaded with LINC00958 siRNA for HCC therapy. The PLGA-based nanosystem ensured the controlled release of si-LINC00958 and protected it from premature degradation. According to the results from cellular uptake experiments, NPs exhibited enhanced uptake into the tumor cells, which may facilitate the accumulation of NPs in the tumor. Biodistribution of NPs upon systemic administration showed accumulated NPs in the xenograft tumor sites as well as the liver. The enhanced permeability and retention (EPR) effect is based on the leaky vasculature and poor lymphatic drainage present in the tumor. NPs > 100 nm can avoid being engulfed by the mononuclear phagocyte system and excreted by the kidney, while NPs < 400 nm preferentially accumulate in tumor sites and exhibit an optimal EPR effect [47]. Taking advantage of the EPR effect, PLGA-PEG(si-LINC00958) NPs achieved a high concentration in HCC xenografts in vivo. Since the liver is the primary organ responsible for drug biotransformation, many NP-based drug delivery systems present substantial amounts of NPs in the liver [48,49]. The results from PDX models demonstrated that this nanodrug system prominently reduced tumor burden. Compared with the control group, hampered tumor growth was observed in the PLGA-PEG(si-LINC00958) NP group. In addition, the results from H&E histopathological analysis and blood biochemical examination confirmed no significant toxic side effects.
Fig. 8 Characterization and therapeutic efficacy of PLGA-PEG(si-LINC00958) NPs in the HCC PDX model. a Representative TEM image of PLGA-PEG(si-LINC00958) NPs. b The size distribution profile of PLGA-PEG(si-LINC00958) NPs. c Cellular uptake of NPs into HCCLM3 cells was evaluated by incubating HCCLM3 cells with Coumarin-6-labeled NPs. d The HCC PDX model was used to demonstrate the therapeutic efficacy of PLGA-PEG(si-LINC00958) NPs via tail vein injection. The harvested xenografts are shown. e Tumor weight was compared between PLGA-PEG(si-LINC00958) NPs and PLGA-PEG(siRNA control) NPs. f Tumor volume was compared between PLGA-PEG(si-LINC00958) NPs and PLGA-PEG(siRNA control) NPs. g RT-qPCR was performed to compare the expression level of LINC00958 between PLGA-PEG(si-LINC00958) NPs and PLGA-PEG(siRNA control) NPs. h Western blotting was performed to compare the expression levels of HDGF and SREBP1 between PLGA-PEG(si-LINC00958) NPs and PLGA-PEG(siRNA control) NPs. i Kaplan-Meier curves were plotted to compare the overall survival between the mice injected with PLGA-PEG(si-LINC00958) NPs and those injected with PLGA-PEG(siRNA control) NPs (P = 0.003). j Representative H&E staining of major organs including the liver, kidney, lung, spleen, and heart at the end of the experiment. k Blood ALT levels between the mice injected with PLGA-PEG(si-LINC00958) NPs and those injected with PLGA-PEG(siRNA control) NPs. l Blood AST levels between the mice injected with PLGA-PEG(si-LINC00958) NPs and those injected with PLGA-PEG(siRNA control) NPs. m Blood Cr levels between the mice injected with PLGA-PEG(si-LINC00958) NPs and those injected with PLGA-PEG(siRNA control) NPs. n BUN levels between the mice injected with PLGA-PEG(si-LINC00958) NPs and those injected with PLGA-PEG(siRNA control) NPs. ***P < 0.001
Conclusions In summary, we comprehensively investigated the functional roles, molecular mechanisms, and clinical applications of LINC00958 in HCC.
Our results revealed that LINC00958 was upregulated in HCC cell lines and tissues. A high LINC00958 expression level was an independent prognostic factor for overall survival in HCC patients. We showed that LINC00958 promoted HCC cell proliferation, migration, invasion, and lipogenesis through the miR-3619-5p/HDGF axis. Moreover, PDX models were employed to confirm the effects of LINC00958 on HCC growth in vivo. We demonstrated that m6A modification was responsible for the upregulation of LINC00958 in HCC. For potential clinical application, we developed a novel nanoplatform encapsulating LINC00958 siRNA for HCC systemic treatment. Our study revealed that LINC00958 plays a crucial part in HCC lipogenesis and progression and highlighted its value as a prognostic predictor and nanotherapeutic candidate in HCC.
Additional file 9: Figure S4 (legend fragment): … assays were performed to analyze the effects of HDGF overexpression on Hep3B cells with LINC00958 knockdown. The data are shown as the mean ± SEM. **P < 0.01 vs. the sh-NC group. (E) Transwell assays were conducted to evaluate the effects of HDGF overexpression on the migration and invasion of LINC00958-silenced Hep3B cells. The data are shown as the mean ± SEM. **P < 0.01, ***P < 0.001 vs. the sh-NC group.
Additional file 10: Figure S5. LINC00958 promotes lipogenesis through miR-3619-5p. (A) Effects of miR-3619-5p overexpression on cholesterol level in LINC00958-overexpressed Hep3B cells. The data are shown as the mean ± SEM. ***P < 0.001 vs. the NC group. (B) Effects of miR-3619-5p overexpression on triglyceride level in LINC00958-overexpressed Hep3B cells. The data are shown as the mean ± SEM. ***P < 0.001 vs. the NC group. (C) RT-qPCR assays were used to examine the effects of miR-3619-5p overexpression on SREBP1 level in LINC00958-overexpressed Hep3B cells. The data are shown as the mean ± SEM. ***P < 0.001 vs. the NC group.
9,667
2020-01-08T00:00:00.000
[ "Biology", "Medicine" ]
Scaling of self-stimulated spin echoes Self-stimulated echoes have recently been reported in the high cooperativity and inhomogeneous coupling regime of spin ensembles with superconducting resonators. In this work, we study their relative amplitudes using echo-silencing made possible by a fast frequency tunable resonator. The highly anisotropic spin linewidth of Er$^{3+}$ electron spins in the CaWO$_4$ crystal also allows us to study the dependence on spin-resonator ensemble cooperativity. It is demonstrated that self-stimulated echoes primarily result from a combination of two large control pulses and the echo preceding them. The ensemble coupling of spins with a common resonator mode is quantified by the cooperativity $C = 4g_{\mathrm{ens}}^2/(\Gamma\kappa_{\mathrm{tot}})$, where $g_{\mathrm{ens}} = g_0\sqrt{N}$, $g_0$ is the single spin-photon coupling strength, $\Gamma$ the inhomogeneous spin linewidth, $\kappa_{\mathrm{tot}}$ the total loss rate of the resonator, and $N$ the number of spins. When $C \ll 1$, emitted echo fields are dissipated from the resonator before they can interact with the spin ensemble again. On the other hand, when $C \gg 1$, a strong collective feedback effect of the emitted field on the spins, e.g. super-radiance [28] and radiation damping [29], can dominate the spin dynamics. The intermediate regime of optimal impedance matching, $C = 1$, is especially relevant for maximum efficiency quantum memories [30]. It is the purpose of this paper to experimentally study the scaling of self-stimulated echoes in these different regimes. Our study is, in particular, aided by the use of a fast frequency tunable superconducting resonator [31] for controlled emission of radiation into the resonator [32]. Generation of SSEs can be understood using a simplified phase evolution in time, as proposed in Ref. 3: a first control pulse brings the spins to different points on the Bloch sphere, which for simplicity can be decomposed into a subset of ground state (g) and excited state (e) amplitudes. A second control pulse at a time τ bifurcates the previous spin amplitudes into four subsets, causing a refocusing at the time 2τ between two evolution trajectories, i.e. a conventional two-pulse Hahn echo. The emitted Hahn echo then itself acts like a pulse on the spins, such that new branches of spin evolution appear and additional refocusing events occur at a time 3τ. Subsequent echoes create more bifurcations and more refocusing events separated by τ. We start our experimental studies by qualitatively verifying the sketch of Figure 1(a), which, in particular, illustrates that formation of SSEs requires phase evolution from all the pulses and echoes preceding it. Echo-trains measured using two control pulses of the same amplitude and phase are shown in Fig. 1(b). Note that the magnitude is plotted on a logarithmic scale. We observe that all subsequent echoes are suppressed when we detune the resonator frequency by an amount ∆ω ≫ κ to suppress the emission of echo2 (top panel) [32]. Applying detuning pulses of the same duration between the echoes (dashed curves) produces no change, thus proving that the detuning pulses do not generate significant phase noise that could cause a suppression of echoes. The same suppression of subsequent echoes is observed when echo3 (bottom panel) is silenced. These observations suggest that the contribution of 2-pulse refocusing to SSE, e.g. from pulse1 and echo1 in echo3, is small. In the following, we expand on the preceding observations and semi-quantitatively study the relative amplitudes of SSEs using in-situ control of radiation fields in the resonator and spin-resonator cooperativity.
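As a quick numerical illustration of the cooperativity defined above, the short sketch below evaluates C for the two Er$^{3+}$ transitions studied later in the text, using the ensemble couplings and linewidths reported there ($g_{\mathrm{ens}}/2\pi$ = 10 MHz and 1.2 MHz, $\Gamma/2\pi$ = 76 MHz and 15 MHz) together with the resonator loss rate $\kappa_c/2\pi \approx 1.9$ MHz; approximating the total loss rate by $\kappa_c$ is our own simplification, not a statement from the text.

    def cooperativity(g_ens_mhz, gamma_mhz, kappa_mhz):
        # C = 4 * g_ens^2 / (Gamma * kappa_tot); all inputs are rates/2pi in MHz.
        return 4.0 * g_ens_mhz ** 2 / (gamma_mhz * kappa_mhz)

    kappa = 1.9  # resonator loss rate / 2pi in MHz, used here in place of kappa_tot
    print(cooperativity(10.0, 76.0, kappa))  # I = 0 transition, ~2.8 (quoted as C = 3)
    print(cooperativity(1.2, 15.0, kappa))   # I = 7/2 transition, ~0.2 (quoted as C = 0.2)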
Our electron spins (with effective S = 1/2) are provided by bulk-doped Er$^{3+}$ substitutional ions in a CaWO$_4$ crystal with a nominal concentration of 50 ppm. The crystal is held with vacuum grease on a superconducting resonator of frequency $\omega_0/2\pi = 6.5$ GHz operating in the overcoupled regime with a loss rate of $\kappa_c/2\pi = 1.9 \pm 0.1$ MHz. The bulk distribution of Er$^{3+}$ and the narrow inductor width of 1 µm naturally result in extremely inhomogeneous Rabi angles and benefit the formation of SSE. Two additional properties are relevant to this study. Firstly, the kinetic inductance of the superconducting resonators (film thickness = 50 nm, inductor width = 0.5 µm) made from NbN allows the resonance frequency to be rapidly tuned by passing a bias current through the inductor strip of the resonator [31]. Secondly, it is possible to access different cooperativities C in the same setup. This is because the two isotopes of Er$^{3+}$, one without a nuclear spin, I = 0 (77%), and the rest with I = 7/2 [33], couple different numbers of spins at different transitions. Moreover, fine tuning of C is facilitated by the spin linewidth varying with the direction of the applied magnetic field, as the highly anisotropic gyromagnetic tensor ($\gamma_{ab} = 117$ MHz/mT, $\gamma_c = 17$ MHz/mT) [34] responds to the charge noise from crystal defects [35,36]. The experiments are performed at the base temperature of a dilution refrigerator at 20 mK, with the magnetic field aligned with the c-axis (ϕ ∼ 0) unless mentioned explicitly. More details of the experimental setup can be found in Ref. 32. The resonator tunability helps to control the back action of the echo field on the spins, that is, to vary the spin rotations during the echo emission, and to study the amplitudes of subsequent SSEs. As shown in the sketch of Fig. 2(a), two control pulses are applied and the resonator is detuned for 20 µs, a time longer than the echo duration, with varying ∆ω around echo1. Figure 2(b) shows the corresponding echo train traces (acquired at a large demodulation bandwidth of 100 MHz to account for the relatively large total loss rate $\kappa_{\mathrm{tot}} \sim 7$ MHz) near the I = 0 transition with C = 3 (see further below). The variation of the echo1 magnitude versus normalized resonator detuning $-\Delta\omega/\kappa_{\mathrm{tot}}$ is plotted in Fig. 2(c), and the observed decay is well accounted for by the resonator filtering function $(\kappa_{\mathrm{tot}}/2)/\sqrt{\Delta\omega^2 + \kappa_{\mathrm{tot}}^2/4}$ [32]. Similar to Fig. 1(b), the subsequent echoes, echo2 and echo3, are progressively suppressed. To quantify their relative suppression, we plot the amplitude of echo2 and echo3 as a function of echo1 and echo2, respectively, in Fig. 2(d). A linear dependence (proportionality constant 0.16 and 0.12, respectively) describes the echo2 and echo3 data well.
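The suppression of echo1 with detuning in Fig. 2(c) follows the resonator filtering function quoted above. The minimal sketch below simply evaluates that expression at a few detunings for $\kappa_{\mathrm{tot}}/2\pi$ = 7 MHz, the approximate value given for this dataset; it is an illustration of the functional form, not the authors' fitting code.

    import numpy as np

    def echo_filter(delta_mhz, kappa_tot_mhz):
        # Resonator filtering of the emitted echo: (kappa/2) / sqrt(delta^2 + kappa^2/4).
        return (kappa_tot_mhz / 2.0) / np.sqrt(delta_mhz ** 2 + kappa_tot_mhz ** 2 / 4.0)

    kappa_tot = 7.0  # MHz
    for delta in [0.0, 3.5, 7.0, 14.0, 28.0]:
        print(f"detuning {delta:5.1f} MHz -> relative echo1 amplitude "
              f"{echo_filter(delta, kappa_tot):.2f}")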
Full quantitative understanding of the scaling of SSE is challenging due to the lack of knowledge of the exact spin frequency detuning and coupling strength distribution. Here, we use a minimalist model to explain the scaling of echo2 and echo3 using classical Bloch theory. Three pulses with arbitrary flip angles $\beta_i$ produce an STE with an amplitude proportional to $\sin(\beta_1)\sin(\beta_2)\sin(\beta_3)$ [1], where we assume a pulse delay $\tau \ll T_2, T_1$. Using control pulses of the same Rabi angle β and the fact that the resulting echo1 fields are relatively much smaller, the resulting spin rotation from back action is $\sin(\theta_1) \approx \theta_1$. Then the STE contribution of echo2 is equal to $\theta_1 \sin^2(\beta)$, where $\theta_i$ denotes the much smaller rotation angle from echo back action. Similarly, the 2-pulse Hahn echo contribution of echo2 (from pulse2 and echo1) is proportional to $\theta_1^2 \sin(\beta)$. The latter is smaller in magnitude than the STE contribution as long as $\beta \gg \theta_1$. Thus a linear scaling of echo2 with echo1 can be established. Similar arguments can be made for echo3 to show that the dominating contribution comes from a 3-pulse STE from pulse1, pulse2 and echo2, with a resulting echo3 proportional to $\theta_2 \sin^2(\beta)$. The proportionality constant extracted from the slopes in the two cases is found to be similar, 0.16 and 0.12, as expected from the model. Overall, our observations in Fig. 2(d) suggest that an SSE primarily consists of a 3-pulse STE from the two large control pulses and the weak echo field preceding it. Barring common prefactors, we can thus quantify the magnitude of the (i + 1)th echo in the limit of $\tau \ll T_1$ as $A_{i+1} \propto \eta \sin(\beta_1)\sin(\beta_2)\,A_i$ (1), where $i > 0$ is a positive integer, $A_i$ denotes the magnitude of the $i$th echo, and the scaling factor $\eta^2$ captures the fraction of power transferred to the spins during the formation of an echo. To verify this equation further, we acquired SSE traces by varying the flip angle of the first control pulse $\beta_1$ (Fig. 2(e)), while keeping $\beta_2$ fixed. Their decay is plotted in Fig. 2(f). It has been previously shown that for strongly inhomogeneous Rabi angles in spin systems coupled to small mode volume resonators, spins for which the pulse amplitudes amount to π/2 and π contribute the most to the Hahn echo [36][37][38]. This allows us to set $\beta_2 = 90°$, and proportionally vary $\beta_1$ using the ratio of pulse amplitudes $\beta_2/\beta_1$. The SSE decays calculated from Eq. 1 are plotted as solid lines in Fig. 2(f), and show an excellent agreement with the measurements using the same scaling parameter η = 0.21 across the entire dataset. Moreover, $\eta \sin^2(\beta) \approx 0.21$ is close to the measured slope in Fig. 2(d) acquired under the same experimental conditions. We now study the dependence of SSE amplitudes on spin ensemble-resonator cooperativity. To this end, we identify two transitions in the spectrum [Fig. 3(a)], I = 0 and I = 7/2, $m_I = 7/2$ ($m_I$ is the nuclear spin projection on the magnetic field axis). From fits performed to $\kappa_{\mathrm{tot}} = \kappa + g_{\mathrm{ens}}^2\Gamma/(\Delta\omega_s^2 + \Gamma^2/4)$ [18], we find coupling strengths $g_{\mathrm{ens}}/2\pi = 10 \pm 1$ MHz and $1.2 \pm 0.1$ MHz, and spin linewidths $\Gamma/2\pi = 76 \pm 5$ MHz and $15 \pm 1$ MHz, with corresponding cooperativities C = 3 and 0.2, respectively. Here $\Delta\omega_s$ is the magnetic-field-dependent detuning of the spin transition frequency from the resonator. The difference in the number of spins is consistent with the isotope abundances and the seven sub-level ground state populations in the I = 7/2 manifold. The echo response measured using the same control pulses (2 µs in duration, such that the pulse bandwidth ≪ Γ) at the two transitions [Fig. 3(b)] shows strongly suppressed or absent SSE for the case of C ≪ 1, and supports similar observations made in Ref. 4.
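To make the scaling explicit, the sketch below iterates the relation of Eq. 1, $A_{i+1} \propto \eta \sin(\beta_1)\sin(\beta_2) A_i$, for a few first-pulse flip angles in the spirit of Fig. 2(f), with η = 0.21 as quoted in the text. The initial echo amplitude and the particular β1 values are arbitrary choices for illustration only.

    import numpy as np

    def sse_train(a1, beta1_deg, beta2_deg, eta, n_echoes):
        # Iterate A_{i+1} = eta * sin(beta1) * sin(beta2) * A_i for an echo train.
        factor = eta * np.sin(np.radians(beta1_deg)) * np.sin(np.radians(beta2_deg))
        amplitudes = [a1]
        for _ in range(n_echoes - 1):
            amplitudes.append(amplitudes[-1] * factor)
        return amplitudes

    eta = 0.21  # scaling parameter extracted in Fig. 2(f)
    for beta1 in (90, 60, 30):  # illustrative first-pulse flip angles in degrees
        train = sse_train(a1=1.0, beta1_deg=beta1, beta2_deg=90, eta=eta, n_echoes=4)
        print(beta1, ["%.3f" % a for a in train])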
To investigate differences in the spin dynamics between the two Er isotopes, the spin-relaxation time is measured using an inversion recovery sequence (Fig. 3(c)). For I = 7/2, we observe an exponential recovery with a decay constant $T_1 = 440 \pm 11$ ms, a value consistent with a direct phonon process [32,39]. In contrast, we observe a bi-exponential recovery for I = 0, with decay constants $T_1^{\mathrm{fast}} = 4.7 \pm 0.6$ ms and $T_1^{\mathrm{slow}} = 97 \pm 12$ ms. However, neither of the two values is compatible with a direct-phonon process (scaling as $1/B^5$), suggesting that a combination of strong collective radiative effects (i.e. super-radiance) and spatial spin diffusion across the low mode volume resonator [22] could be responsible. The role of incoherent radiation from enhanced spin relaxation towards the formation of SSEs can, however, be ruled out, as resonator detuning pulses of duration 20 µs applied in between the echoes (dashed curves in Fig. 1(b)) do not alter the subsequent echoes. We also measure the spin coherence times $T_2$ at the two transitions and find that the contrasting SSE amplitudes are not related to the relative $T_2$ times. In fact, $T_2 = 2.5$ ms for I = 7/2 is four times longer than that for I = 0 and possibly limited by instantaneous diffusion [40,41]. Another control of the cooperativity is achieved through the different Γ of the spin ensemble obtained when rotating the applied magnetic field with respect to the c-axis of the crystal. Figure 3(d) shows the measured magnetic field $B_{\mathrm{res}}$ at which the I = 0 transition is resonant with the resonator (left axis) and the extracted spin linewidth Γ (right axis). The $B_{\mathrm{res}}$ positions agree with the spin Hamiltonian of Er$^{3+}$ with a reasonable misalignment angle of 2.5° from the true c-axis. We observed a change in Γ from 30 MHz at ϕ = 2.5° to 210 MHz at ϕ = 21°. Similar observations have been made previously [35,36] and attributed to a combination of local electric fields from charge defects, charge compensation, and the lack of inversion symmetry at the substitutional Ca$^{2+}$ sites. On the other hand, the extracted ensemble coupling strength $g_{\mathrm{ens}}$ decreases by only 10% in this ϕ range. The small variation in $g_{\mathrm{ens}}$ is consistent with $g_0$ calculated from the anisotropic gyromagnetic tensor, and the fact that the same number of spins are coupled to the resonator due to the bulk distribution of Er$^{3+}$. For SSE measurements, we choose slightly off-resonant B fields at different ϕ to achieve a maximum echo amplitude [41] and somewhat similar $\kappa_{\mathrm{tot}}$ values (between 2.8 MHz and 4 MHz) for better comparison. The SSE magnitudes measured with the same control pulses and delay τ = 25 µs are plotted as a function of echo number in Fig. 3(e) for different cooperativities C. We note that the off-resonant C is extracted by comparing the intra-cavity field measured at a repetition rate $\gamma_{\mathrm{rep}} \gg 1/T_1$ (spins saturated) with that taken at $\gamma_{\mathrm{rep}} \ll 1/T_1$ (spins polarized) [12,14]. For all values of C, we observe an exponential decay of echo amplitudes, similar to Fig. 2(e) and Ref. 4. For extracting η using Eq. 1, once again we set $\beta_{1,2} = 90°$ to select the spins that maximally contribute to SSE amplitudes. The calculated SSE decays and corresponding η for different C are plotted in Fig. 3(e, f). Interestingly, the scaling parameter η increases with C in an apparent linear fashion. In contrast, η = 0.21 extracted in Fig. 2(f) at a larger C = 3 is smaller than η = 0.3 for C = 1.5 in Fig.
3(e), suggesting the role of the larger $\kappa_{\mathrm{tot}}$ in the smaller spin rotation during echoes, the crudeness of the model, and a more complex dependence of η on C. In conclusion, we have used control of the intra-cavity field, in particular through echo-silencing, and cooperativity tuning to study the scaling of self-stimulated echoes in a spin ensemble strongly inhomogeneously coupled to a small mode volume superconducting resonator. Our results demonstrate that the amplitude of a self-stimulated echo primarily arises from a three-pulse stimulated echo involving the two large control pulses and the preceding echo field. Further studies will target a larger range of C, especially at a fixed $\kappa_{\mathrm{tot}}$, to map out the scaling and decay of SSE amplitudes against C. STE and SSE in combination with phase imprinting [32,42] could also be used to implement selective in situ magnetic resonance techniques such as diffusion spectroscopy and imaging [43]. We acknowledge the support from the UK Department for Science, Innovation and Technology through the UK national quantum technologies program. S.D.G. acknowledges support by the Engineering and Physical Sciences Research Council (EPSRC) (Grant Number EP/W027526/1). The Chalmers group acknowledges the support from the Swedish Research Council (VR) (Grant Agreements No. 2019-05480 and No. 2020-04393), EU H2020 European Microkelvin Platform (Grant Agreement No. 824109), and from the Knut and Alice Wallenberg Foundation via the Wallenberg Centre for Quantum Technology (WACQT). This work was performed in part at Myfab Chalmers.
FIG. 1. Self-stimulated spin echoes (SSE). (a) A schematic of the refocusing mechanism leading to self-stimulated echoes at 3τ, 4τ, 5τ, ... as originally described in Ref. 3. (b) Measured magnitude of echo trains using two pulses of the same amplitude and phase, of duration 2 µs, and τ = 25 µs. The two panels compare cases when the resonator is detuned to selectively suppress echo2 (top) or echo3 (bottom) emissions. Dashed curves in the top (bottom) panels correspond to cases when the same resonator detuning pulse is applied between echo1 (echo2) and echo2 (echo3). Note that the dashed lines lie almost entirely on top of the solid lines, i.e. detuning in between echoes has no effect. Measurements are done at C = 3.
FIG. 2. SSE response versus intra-cavity field. (a) An experimental sequence consisting of two control pulses of flip angles $\beta_i$ and duration 2 µs, with a 20 µs long resonator detuning pulse across echo1 of varying ∆ω. (b) SSE traces at different ∆ω and the same flip angles β = β1 = β2. The larger noise floor is due to the larger measurement bandwidth BW ∼ 100 MHz compared to the other plots, acquired at a BW of 2 MHz. (c) Measured (symbols) and theoretical (curve) echo amplitude against different resonator detunings. (d) Scaling of echo{2, 3} amplitudes (measured: symbols, fits: lines) with corresponding changes in echo{1, 2}. (e) SSE traces for different flip angles β1 of the first control pulse and fixed β2. The sequence is shown in the inset. (f) SSE magnitude decay for different β1 versus echo number. Solid lines are calculated from Eq. 1. For all plots C = 3.
FIG. 3.
SSE response versus spin-resonator cooperativity. (a) Continuous wave spectroscopy near the two Er$^{3+}$ transitions I = 0 and I = 7/2, $m_I = 7/2$ at zero angle (measured: symbols, fit: lines). (b) Spin energy relaxation (measured: symbols, fit: lines) using inversion recovery sequences for the two transitions at zero angle. (c) Echo response using control pulses with the same power. (d) Spin resonance position (left axis, measured: symbols, theory: line) and spin linewidth (right axis) for the I = 0 transition extracted from continuous wave spectroscopy. The magnetic field angle ϕ is relative to the c-axis of CaWO$_4$. (e) Decay of SSE magnitudes for different cooperativities, but similar $\kappa_{\mathrm{tot}}$, obtained at different ϕ for the I = 0 transition. Solid lines are calculated using Eq. 1. (f) Scaling of the extracted scaling parameter η as a function of C. The dashed line is a guide to the eye.
4,027.8
2023-09-20T00:00:00.000
[ "Physics" ]
Semi-supervised Learning for the BioNLP Gene Regulation Network Background The BioNLP Gene Regulation Task has attracted a diverse collection of submissions showcasing state-of-the-art systems. However, a principal challenge remains in obtaining a significant amount of recall. We argue that this is an important quality for Information Extraction tasks in this field. We propose a semi-supervised framework, leveraging a large corpus of unannotated data available to us. In this framework, the annotated data is used to find plausible candidates for positive data points, which are included in the machine learning process. As this is a method principally designed for gaining recall, we further explore additional methods to improve precision on top of this. These are: weighted regularisation in the SVM framework, and filtering out unlabelled examples based on a probabilistic rule-finding method. The latter method also allows us to add candidates for negatives from unlabelled data, a method not viable in the unfiltered approach. Results We replicate one of the original participant systems, and modify it to incorporate our methods. This allows us to test the extent of our proposed methods by applying them to the GRN task data. We find a considerable improvement in recall compared to the baseline system. We also investigate the evaluation metrics and find several mechanisms explaining a bias towards precision. Furthermore, these findings uncover an intricate precision-recall interaction, depriving recall of its habitual immediacy seen in traditional machine learning set-ups. Conclusion Our contributions are twofold: 1. An exploration of a novel semi-supervised pipeline. We have succeeded in employing additional knowledge through adding unannotated data points, while responding to the inherent noise of this method by imposing an automated, rule-based pre-selection step. 2. A thorough analysis of the evaluation procedure in the Gene Regulation Shared Task. We have performed an in depth inquiry of the Slot Error Rate, responding to arguments that lead to some design choices of this task. We have furthermore uncovered complexities in the interplay of precision and recall that negate the customary behaviour commonplace to the machine learning engineer. Background The set of BioNLP shared tasks [1] form a biannual challenge used by many to apply and develop state-ofthe-art methods in the field of biomedical information extraction (IE). In 2013 in its third instalment, it again succeeded in attracting a considerable amount of contributions from an international community of researchers. This work is spread over six different subtasks, each with a focus on fine-grained IE to construct knowledge bases in their respective domain. The Gene Regulation Network subtask [2] tries to attain the construction of a relation network encompassing the extracted knowledge, in order to build models to represent the behaviour of a system. This network can then serve as a base for representing current knowledge, and be leveraged for making inferences and predictions, i.e. towards experiment design. In the case of this particular task, this system entails the whole of molecular interactions between genes and proteins in a specific bacterium, the bacillus subtilis. An example sentence for this task is given in Figure 1. Participants are asked to extract a regulation network from sentences taken from PubMed abstracts describing these phenomena. 
This network is comprised of six different types of relations, which are related into a small hierarchy (see Figure 2). At both train and test time, gold standard annotations of entities are provided, making this a pure relation extraction task, without the need to do named entity recognition, a task with its own set of difficulties and challenges. Of further note is the fact that submissions are evaluated on the produced network as a whole, namely the set of relations detected on the test data as a whole. We discuss the impact of this global scoring in the section Results and discussion. In the systems produced for this task, we notice a strong tendency to favour precision, i.e. controlling the false positive rate. The top submission [3] obtained a precision score of 68%, however only reaching a recall of 34%. While there certainly is a need for reliable results when working with biomedical knowledge, covering a sufficient proportion of true positives (i.e. recall) can be equally fundamental in many practical applications. Examples of these are hypothesis generation and knowledge base construction, especially in settings where adding more data can not solve the problem of finding additional true positives (as can be the case in e.g. texts describing recent findings). Indeed, the interest in developing systems for inference and/or prediction equally lies in the retrieval of a sizeable hypothesis set, rather than reaching only those that can be found with high confidence. One way to balance a system in favour of recall is the exploitation of additional unannotated data. By working in a semi-supervised fashion, a learner can be made more aware of the wide variety of patterns encoding a relationship. This happens at the cost of introducing more noise (and hence decreasing precision), since there is no reliable way of labelling the extra data. In this paper we explore a method to decrease this cost, effectively keeping precision stable while improving recall. Basing ourselves on the model of [4], that achieved a second place for this task, we explore how semi-supervised techniques can improve the performance that this system obtains in its supervised form. We further investigate several techniques to counterbalance the noise added by these methods. Next to the traditional measure of weighing regularisation parameters, we go on to develop a novel method based on probabilistic rule-finding. Next, we look at the experimental set-up and compare the results of the proposed methods. We also discuss some of the properties of this task, and evaluate how these can impact performance in terms of precision and recall. This influence can be both direct, e.g. because of data skewness or pre-scoring processing, and indirect. An example of the latter is found in the choice of the final scoring metric (the Slot Error Rate), altering some of the parameter choices when designing and selecting a model. The section thereafter reviews related work. We finish with conclusions and future research questions. Baseline model We base ourselves on the model of [4]. The main reasons for this are as follows: • Their model came in second place, showing decent performance; • Unlike the winning entry, their model does not use hand-crafted rules, and is based on Support Vector Machines. Their set-up therefore lends itself perfectly to extension into a semi-supervised framework as described below. The main configuration of the system of [4] is a collection of Support Vector Machines (SVMs, see [5]), one per relation type. 
The authors construct a data point for each pair of genic entities in a sentence, effectively considering all potential agent/target pairs for the relations. The kernel used is a Gaussian RBF kernel (see [6] for the seminal work, and [7] for a good overview). The novelty of [4] lies in the feature construction. The feature vectors for candidate relation tuples are built as follows: $f = f_{base} \oplus f_{context}$. This is a concatenation (symbolised by $\oplus$) of local features $f_{base}$, complemented by what is referred to as context features, $f_{context}$. The local features consist of the classical word information (stem + part-of-speech) along with the biomedical compound type (e.g. Gene, Protein) for the words that the entities comprise, with different parts for the agent and target entities. The context part is then constructed in the following fashion, also separately for both entities: $f_{context} = \frac{1}{Z}\sum_i a^{d(w, w_i)}\, v(w_i)$, with $w$ being the words of the entity at hand, $v(w_i)$ the vector encoding the word $w_i$, and the sum going over all the $w_i$ words in the sentence. $d(w, w_i)$ is the distance in number of words between $w$ and $w_i$. This is in essence an average of the vectors encoding the different non-entity words in the sentence, weighted inversely by their distance to the entity words. $a$ is a constant controlling how fast the weights decay with distance, and $Z$ is a normalisation factor. Note that the traditional fashion of including textual context consists of concatenating these separate word vectors instead of averaging. This leads to feature vectors with only values of 0 or 1 as components, whereas the entries in $f_{context}$ can take on all real values in the interval [0,1]. We direct the reader to the work of [4] for further details. A few specific differences are to be noted between our implementation and that of the submitted system. We use the LibSVM [8] package as provided by the Scikit-learn Toolbox [9]; this difference in the library used should be of minor influence on the results, and we are indeed able to replicate their performance. Furthermore, as mentioned in the original paper, the distance d used for the submitted results was taken to be the distance in the parse tree, whereas later tests proved to be more favourable towards using a 'flat' sentence distance, as described above. We compared both options in a cross-validation setting (utilising trees generated from the parser by [10]) and indeed found the use of the latter to give better results. We use a value of a = 0.9. A distant learning approach The main issue of a fully supervised system is the difficulty of generalising towards unseen patterns. This problem is more apparent the sparser the data, and the richer the representation. With our baseline system having an elaborate feature representation, we suspect this to be a big factor in this framework. Furthermore, new data points will likely entail unseen words, in part counterbalancing the effectiveness of this sort of feature scheme, albeit widely used in NLP situations (as shown in e.g. [11] and [12]). Because of these reasons, the base system is likely to suffer from poor generalisability, as also testified by its poor recall score. A corpus of related, but unseen, data points can provide a source of new patterns to incorporate in our learner. Of course, the main obstacle is the lack of labelling for this data; we have no knowledge of which points are to be marked as positive. Instrumental in any semi-supervised framework are therefore: • An approximation method to identify the labelling of unseen data.
As this can never fully substitute the precision of annotations supplied by a human expert, the uncertainty in this introduces additional noise. Hence also the need for the next item: • Means of managing the uncertainty in adding unlabelled data. Since the labellings now contain more noise, this inherently changes the optimal learning strategy; a semi-supervised method needs to take this into account. We propose an expansion to the distant supervision framework (see [13,14]). In this line of methods, the classifier is trained on a set of 'bags' of data points, with the defining property that positive bags are only known to be partly containing positively labelled points. The negative bags on the other hand are more certain to effectively contain only negative points. As shown in [14,15], one use case for this set-up is exactly relation learning, in the event of having a set of known relations between two entities, but when no finer-grained annotations (i.e. on a document or sentence level) are available. Contrary to this framework, we do have at our disposal the fine-grained annotations of our labelled data set. However, the structure of these distant learning problems points us to the aforementioned approximation method to add unlabelled data to the training data. Namely, the following observation is used: if a biological relation exists between two entities (as seen in the labelled data), there is a substantial probability that another (unlabelled) sentence containing both entities will also encode this relation. We therefore add any data point from the unannotated corpus that is composed of two such entities to the training set, labelling it as positive. Note that, since our main goal is to introduce new patterns to the classifier, we also use the vocabulary from these sentences when constructing feature vectors. This ensures that we use an unbiased representation of these data points. Opposite to the case of positive examples, the same inference can not be performed here to extract negative data points. Absence from a sparse set of known relations only marginally changes the probabilities on these points. We therefore refrain from adding negatives from the unlabelled data, barring further methods to obtain a more accurate selection. This is where our case differs from most distant supervision systems, who are able to extract negative data points due to either explicitly providing negative seed examples, or having ample data to employ a closed world assumption [16]. The latter presumes an adequate coverage of positive data, such that everything outside of this knowledge is seen as negative. As will be seen, the pre-selection filter we develop in the following subsection provides us with an alternative method to extract negatives; there we will revisit our choice. We will refer to the above method as the 'basic' method (cfr. in results Table 1 the entry [BASIC]), as opposed to the systems augmented with the techniques described below. Methods of counterbalancing the added noise Whenever reliability of labelling is affected, this directly influences precision. The basic method proposed above is guaranteed to introduce new patterns to the classifier, which is expected to improve recall. However, this comes at the cost of adding uncertainty to the labelling of the data, which is prone to an increase in false positives. In this part, we will look at different methods to counter this effect and maintain adequate precision. 
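Before turning to those counterbalancing methods, here is a concrete sketch of the 'basic' distant-labelling step described above: every candidate pair from an unannotated sentence whose two entities are known, from the labelled data, to stand in some relation is added as a positive example. The data structures are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the basic distant-labelling step: label as positive any candidate pair
# whose entities co-occur in a known relation from the annotated data.
def distant_positives(unlabelled_sentences, known_relation_pairs):
    """unlabelled_sentences: iterable of (tokens, entity_names) per sentence.
    known_relation_pairs: set of frozenset({agent, target}) observed in the labelled data."""
    added = []
    for tokens, entity_names in unlabelled_sentences:
        for i in range(len(entity_names)):
            for j in range(i + 1, len(entity_names)):
                pair = frozenset({entity_names[i], entity_names[j]})
                if pair in known_relation_pairs:
                    # noisy positive: the sentence may or may not actually encode the relation
                    added.append((tokens, entity_names[i], entity_names[j], 1))
    return added
```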
We study the effects of a general method known to deal with different kinds of noise, namely having a non-constant regularisation parameter in the SVM. We then move on to develop a method of pre-selecting the data that is added from the unlabelled corpus, leading to a more fine-grained control of the introduced uncertainty. Weighted regularisation A conventional way to deal with noisy training examples comes with the observation that, in the traditional setup, only the positive data points are plagued by this noise. Hence, in a soft-margin SVM framework (as developed by [17]), a different regularisation policy is introduced for positive and negative examples, as first proposed by [18], and later also employed by e.g. [15,19]. Let χ + , χ − be the set of positive and negative data points respectively, and j(x) be the feature representation for x, this then leads to the following optimisation formulation: w is the weight vector that defines the separating hyperplane together with the constant b as a bias term. The ξ x serve in this optimisation problem as slack variables, allowing a trade-off of maximising the margin against having a few points surpassing that margin. By having two regularisation constants C + and C − we can allow the margin for positive points to be 'softer', accounting for the additional uncertainty in this subset. An automatic rule-detection algorithm for pre-selection of unannotated data Many machine learning systems that serve a specific application make use of a framework that incorporates specialist knowledge. A prevalent mechanism for this is by having some rule-based pre-/post-processing. We propose a method for extracting some of this knowledge from the labelled data in a fully automated fashion. This mechanism covers many standard techniques regularly used by system engineers, such as filtering on trigger words that explicitly refer to interactions ('transcription', 'binding', ...) [16,3], or on the type of bio-molecule for specific roles (e.g. the target of a Binding event is a Gene or Site entity) [2,3]. However, the automatic nature of our method discards the need for manually identifying and pinpointing useful rules. Furthermore, it is agnostic of the nature of the data, and hence perfectly adaptable to texts in any domain or task. In the framework of our semi-supervised system, this can then be used to obtain a more fine-grained selection from our unlabelled corpus. We do this by extracting patterns from the features of the labelled training data, and including from the unannotated data only those points that also adhere to these observed patterns. As we are dealing with a pre-selection step on what is expected to be positive, our main focus is on detecting sufficient conditions in the feature space for negativity. In order to find such a rule implicitly present in the data, we observe the following: where f i is the ith feature of a data point, Vi a set of values, and 0,1 have been used as shorthand for the (negative resp. positive) labelling of that point. The extension towards rules that conjoin several features is immediate. While the above observation is necessary for a negative labelling, it is by no means sufficient, i.e. finding a zero frequency can not exclude chance, especially in small datasets. To see how much of a factor f i effectively is in the labelling of the point, one could look at probabilistic measures such as Mutual Information, Bayes Factor or the Kullback-Leibler divergence. 
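As a brief aside, the weighted-regularisation scheme above can be obtained in practice through class-dependent penalties: in scikit-learn, for instance, class_weight rescales the base C per class, so that C+ = w_pos * C and C- = w_neg * C. This is only a sketch under assumed weight values, not a prescription.

```python
# Sketch of weighted regularisation: a softer margin (smaller effective C) for the
# noisier, distantly labelled positive class.
from sklearn.svm import SVC

def weighted_regularisation_svm(X, y, C=1.0, w_pos=0.3, w_neg=1.0, gamma="scale"):
    # class_weight multiplies the base C per class, i.e. C_plus = C * w_pos and
    # C_minus = C * w_neg; w_pos < w_neg lets noisy positives violate the margin
    # more cheaply, limiting their pull on the separating hyperplane.
    clf = SVC(kernel="rbf", C=C, gamma=gamma, class_weight={1: w_pos, 0: w_neg})
    return clf.fit(X, y)
```

The rule-detection idea, and the probabilistic measures just mentioned, are taken up again below.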
However, most of these measures are only meaningful on nonzero probabilities, mainly because of the occurrence of logarithms or divisions of these probabilities. To escape the ill-behaved nature in this situation, we look at the probability mass P(f i ∈ V i | 0), and demand it to be above a certain threshold. This avoids the confusion of rarely occurring feature values with rules, since this significantly lowers the probability that all mass ends up with negative points by chance. In the algorithm we construct below, we select good features to extract rules from, as well as combinations of two feature dimensions. While it is feasible to explore the use of even more features simultaneously in a rule, we abstain from doing so to preserve the balance between exhaustiveness and system performance. The steps to efficiently find these rules are as follows: 1 if Count(f i ∈ V i , 1) = 0 then 8: Add rule (f i ∈ V i 1) to R 9: end if 10: end if 11: end for 12: for all i; j ∈ T do 13: if Count(f i ∈ V i ; fj ∈ V j , 1) = 0 and Count(f i ∈ V i ; f j ∈ V j , 0) > threshold then 14: Add rule (f i ∈ i ∧ f j ∈ V j 0) to R 15: end if 16: end for A few things to note: • As many of our features can take any real value in the interval [0,1], bins are constructed to re-establish a binary nature, i.e. membership of V i is analogous to f i = 0 in the case of bi-valued features. Respectively, V i designates f i = 1. • For the sake of legibility, we implicitly assume V i , V j to be the 'right' bins. In reality, membership to both V i andV i , respectively V j andV j are checked. • Because P(f i ∈ V i | 0) = P (f i ∈ V i , 0)/P(0) and P(0) is a constant for a given training set, it is more efficient to work with joint probabilities. , Count(f j ∈ V j , 0)), we can already eliminate many combinations of feature dimensions to consider; this is the function of the set T. In our experiments, this reduces the number of combinations to check from 3.7 million to 30,000 and keeps the above algorithm tractable. Important to note is that this algorithm now gives us a tool to also select for negative examples in a distant supervision-like fashion. The basic selection criterion adapted from this general framework relies on the augmented probability of having a positive label, given that the relation exists in the labelled data. As argued before, a similar reasoning generally does not hold for negatives, rendering selection for them infeasible. However, the rules extracted by the above algorithm can serve not only to eliminate very unlikely candidates for positive labelling, as previously done. In fact, because these rules try to encode sufficient conditions for negativity, we can also employ them to distinguish a subset of all the other unlabelled data as being very likely negative. This offers us the opportunity to add both positive and negative points from our unannotated corpus, a technique not feasible in the basic distant learning framework. The threshold effectively decides the amount of rules extracted from the labelled data, and can be seen as an additional hyperparameter in the model. Based on our dataset, we found a threshold of 20 -30% of the size of the (annotated) data set to give the most balanced results in terms of precision vs. recall. Depending on the application requirements, a lower threshold will improve precision, while a higher threshold would have us expect an improvement in recall. 
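The rule-detection procedure sketched in the listing above can be summarised as follows; this is a hedged re-implementation assuming the features have already been binarised into two bins, with illustrative thresholds and data structures rather than the exact submitted system.

```python
# Hedged sketch of automatic rule detection: find (binned) feature values that never
# co-occur with a positive label, yet occur with negatives often enough that chance
# is an unlikely explanation.
import numpy as np
from itertools import combinations

def extract_negative_rules(X, y, threshold):
    """X: (n, d) array of features binarised into {0, 1}; y: (n,) array of 0/1 labels.
    Returns single-feature rules [(i, v)] and pair rules [((i, vi), (j, vj))],
    each read as 'feature i falls in bin v  =>  negative'."""
    pos, neg = X[y == 1], X[y == 0]
    single_rules, candidates = [], []
    for i in range(X.shape[1]):
        for v in (0, 1):                        # check both bins of each feature
            pos_count = np.sum(pos[:, i] == v)
            neg_count = np.sum(neg[:, i] == v)
            if pos_count == 0 and neg_count > threshold:
                single_rules.append((i, v))
            if neg_count > threshold:           # worth considering for pairs (the set T)
                candidates.append((i, v))
    pair_rules = []
    for (i, vi), (j, vj) in combinations(candidates, 2):
        if i == j:
            continue
        joint_pos = np.sum((pos[:, i] == vi) & (pos[:, j] == vj))
        joint_neg = np.sum((neg[:, i] == vi) & (neg[:, j] == vj))
        if joint_pos == 0 and joint_neg > threshold:
            pair_rules.append(((i, vi), (j, vj)))
    return single_rules, pair_rules
```

In this form, the candidate set T simply consists of the single feature bins whose negative count already exceeds the threshold, which is what makes the pairwise pass tractable.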
Subject and data The Gene Regulation Network Task tries to accomplish detection of relations overarching a diverse set of molecular interactions. Specifically, six different types of relations are to be extracted: inhibition, activation, requirement, binding, transcription and regulation. The training and development set consists of 134 sentences, jointly encoding 230 interactions. On average this amounts to 38 examples per relation type. Considering the specialised language and grammar often used in scientific publications, the amount of training data seems rather sparse to learn a good general representation in such a complex output space. As previously argued, this is the main motivation for including additional data for use in the methods described above. We therefore augment the dataset we have with all sentences from PubMed abstracts responding to the query for "bacillus subtilis sporulation" (as accessed on 16/08/2013). Beginning from the annotated data points, we add a sentence from those unannotated texts if it contains at least two entities that also occur in our annotated data. Without these entities, a sentence could indeed never encompass a candidate data point for a relation. As such, from the initial 14,109 sentences, only 1,859 are retained, resulting in 11,778 possible entity pairs. Although of minor influence on the end result, we also leave out sentences that are already in the training set. In Table 2 we have shown the average amount of data points that effectively got added to the training set for each system. The Slot Error Rate From the predictions, a network gets constructed with the entities as the nodes and the relations between them as arcs. This network is then used for measuring performance: it gets compared to the reference by means of the Slot Error Rate (SER). This measure is defined by [20] as: with: • S the number of substitutions, i.e. edges that are predicted, but with the wrong type; • I the number of insertions (false positives); • D the number of deletions (false negatives); • N the number of arcs in the reference network. For the following analysis, we further define • C the number of correctly predicted relations; • M the number of arcs in the prediction. With this notation, precision and recall can be written as: The main motivation of [20] in proposing this error measure is the observation that F 1 , the often-used harmonic mean of precision and recall, can be seen to be: This derivation leads to believe that substitutions get overweighted in the use of this scoring mechanism. While by no means questioning the usefulness of the separate components (precision and recall), the SER gets proposed as a more balanced way of combining them as a means to compare systems. The devil is however in the details; or rather, the denominator. While it is true that S gets a bigger weight in the numerator, one has to account for the weighting of the different components in the denominator, since N + M = 2(C + S) + D + I where we use that N = C + S + D and M = C + S + I. A similar weight scheme can hence be seen in the denominator as well, softening the argumentation against it. With a similar derivation, one finds: This insight shows us that in attempting to lower the weight for S, this error rate has become completely independent of this factor altogether (since N is a constant, given the reference network)! Furthermore, the unboundedness of this measure can be fully attributed to the number of insertions. 
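A small numerical illustration of this analysis may help. Writing C for correct arcs, S for substitutions, I for insertions and D for deletions, with N = C + S + D reference arcs and M = C + S + I predicted arcs, the standard slot-error-rate definition SER = (S + I + D)/N (assumed here to be the one used by the task, consistently with the derivation above) indeed reduces to (N - C + I)/N:

```python
# Precision, recall, F1 and SER from the arc counts; trading substitutions for
# deletions leaves the SER unchanged, since S + D = N - C.
def scores(C, S, I, D):
    N = C + S + D                 # arcs in the reference network
    M = C + S + I                 # arcs in the predicted network
    precision, recall = C / M, C / N
    f1 = 2 * C / (N + M)          # equals 2*precision*recall / (precision + recall)
    ser = (S + I + D) / N         # = (N - C + I) / N, independent of S
    return precision, recall, f1, ser

print(scores(C=40, S=10, I=5, D=10))   # SER = 25/60, precision = 40/55
print(scores(C=40, S=0,  I=5, D=20))   # SER = 25/60 again, although precision differs
```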
This can explain the prevalence of conservative systems that this task has received: as can be seen from the official results, all but one submission have a very low number of arcs in their prediction, which could be attributable to pursuing a low I figure. Error measures: uses for comparison and model optimisation By this analysis, we wish by no means to imply that the SER is a bad scoring mechanism per se. This kind of word error rates is widely used in several research branches, and with good reason. However, as the name somewhat shows, these are situations where a more or less fixed number of slots need to be 'filled', such as (speech) phoneme recognition or named entity recognition. In our notation, this would be equivalent to M ∼ = N. If this constraint is taken into account, one can show that SER ∼ = 1.5(1 − F 1 ), which is exactly what [20] find in their comparative analysis of measures. In different settings however, where the above approximation is not sure to hold, the choice of SER implies an additional degree of freedom, of which the consequences are not evident to grasp. In this more general case, SER is seen to overly reward precision in a great part of the result space. This can even occur at the cost of recall, as will be shown below. We believe there is an interesting opportunity for further research and discussion on this matter. Interesting, more general analyses can be found in both [21] and [22]. In the light of this study however, we mainly wish to highlight the inherent bias towards precision this design choice entails. As we are investigating methods of obtaining recall, this is certainly a factor to take into account. Comparison of performance between different systems (intersystem performance) is not the only function of a measure. The same measures get generally used for intrasystem measurements as well: in the comparison of multiple incarnations of models, and more commonly, hyperparameter optimisation. In order to asses the behaviour of the latter under different performance measures, we consider an ideally automated setting of optimising, not unlike running a gradient descent/ascent algorithm. In contrast to the case of general convex optimisation however, there is no convergence to a unique optimum. Rather, we are limited by the boundary of our system's performance, generally known as the precision-recall curve: the maximum precision that can be obtained for any required recall. Hence, we are driven by the measure's gradient until that border is reached. As we can see in Figure 3, the gradient field of SER shows some interesting behaviour. In a substantial region of the recall-precision space, there is an enormous push towards increasing precision. In the region of precision below 50%, this even happens at the cost of maintaining recall. As a result, a system optimised for this measure will generally show good performance, but has little focus on improving recall. For comparison, the analogous field for the F 1 measure is shown in Figure 4, which displays a better balance between favouring recall or precision, based on which is most lacking. As previously argued, there are use cases where an adequate amount of recall is called for. With this in mind, we point out that F 1 is embedded in a larger family of F-measures: where b is a measure of the relative value to the end user of recall with respect to precision [23]. We obtain F 1 for the case of b = 1, meaning precision and recall are balanced in evaluation. 
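For reference, the usual van Rijsbergen form of this family, written here as a reconstruction of the omitted display (P and R denoting precision and recall), is

```latex
F_b \;=\; \frac{(1 + b^2)\, P\, R}{b^2\, P + R}, \qquad F_1 = \frac{2 P R}{P + R},
```

so that b < 1 shifts the emphasis towards precision and b > 1 towards recall, in line with the discussion that follows.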
This parameter b can be a great tool for the system or task designer to designate the proportion of importance he wishes to place on the precision/recall trade-off. If precision is to be targeted, a value of b < 1 will accomplish this, without having gradients go 'against the grain' of increasing both basic measures. Aggregation of predictions and impact on scoring A final concern is the aggregational processing that occurs before calculating the performance measures. In a traditional machine learning setup, scores are calculated in a local scope; meaning, every predicted point is compared to a ground truth, and from the numbers extracted for correct predictions, substitutions, insertions and deletions, the necessary proportions are calculated. In the GRN task [2], performance is measured in a global fashion, due to the processing on the solution set that takes place before calculating the score. This happens in two steps: • From the predicted classifications a network is built. All scoring is done with respect to this, implying that multiple classifications of a same relation get collapsed into one. • 'Resolution of redundant arcs': recall that the different types of relations are ordered into a taxonomy ( Figure 2). Before scoring, any relation between two entities that is less specific (i.e. higher up the tree) than another appearing in the set, is removed. We can see that this procedure renders the precisionrecall trade-off a lot more intricate than in a traditional machine learning setting. In a local scoring procedure, the number of true positives can never decrease by adding more predictions; this is the main logic behind Receiver Operating Characteristic (ROC) curves as monotonously non-decreasing functions. Analogously, in the recall-precision space, this ensures a non-increasing curve of attainable points. Furthermore, this curve spans the whole range of recall: a recall of 100% is always attainable with a precision of at least the ratio of positives in the test set, a worst case that corresponds with classifying all test points as positive (see [24] for a thorough analysis of this and a performance measure that ensues from this, the Area Under Precision Recall Curve (AUCPR)). These principles no longer hold when removing predictions prior to measuring; adding a more specific prediction to an existing true positive renders the latter as non-existent, and recall at the end of the precision-recall curve will be limited by the ratio of positives that have the most specific relation (the leaves of the hierarchical tree in Figure 2). This dynamic stands orthogonal to research on performance measures in a hierarchical setting (as in [25]), which is pursuing less level-dependence in assessing predictions. This demonstrates that attaining sufficient recall is a greater challenge than in a regular setting. Furthermore, by adding a layer of complexity, it convolutes multiple tools that are basic in systems engineering: error analysis, model selection and comparison. We therefore wish to advocate the addition of local, unprocessed evaluation figures in future instalments of this task. Experiments Results for our experiments can be seen in Table 1. Each system has seen its hyperparameters optimised separately by a grid search, 25-fold cross-validated over the training data. The basic method we propose is entry [BASIC] in this table. 
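To make the aggregation step described earlier in this section concrete, the sketch below collapses duplicate predictions into a network and then removes any arc whose type is less specific than another predicted arc between the same entity pair. The PARENT map is only an assumed stand-in for the taxonomy of Figure 2, not the official hierarchy.

```python
# Sketch of the global post-processing applied before scoring.
PARENT = {                      # child relation type -> more general type (assumed shape)
    "Transcription": "Regulation",
    "Inhibition": "Regulation",
    "Activation": "Regulation",
}

def ancestors(rel_type):
    """Yield all less specific types above rel_type in the taxonomy."""
    while rel_type in PARENT:
        rel_type = PARENT[rel_type]
        yield rel_type

def build_scored_network(predictions):
    """predictions: iterable of (agent, target, relation_type); duplicates allowed."""
    arcs = set(predictions)                       # collapse repeated classifications
    redundant = set()
    for agent, target, rel in arcs:
        for anc in ancestors(rel):
            if (agent, target, anc) in arcs:      # a less specific arc coexists: drop it
                redundant.add((agent, target, anc))
    return arcs - redundant
```

Because a more specific prediction can erase an arc that was previously counted as correct, adding predictions can lower the number of true positives, which is exactly the non-monotone behaviour discussed above.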
Even without any added noise-balancing measures, the distant learning framework can already showcase more than a doubling in recall compared to the original submission results of [4]. In light of the previous discussion, this demonstrates a manifest improvement in this dimension. Results for the probabilistic pre-selection approach we developed can be found in the entries under [PRE-SEL]. There we explore several possibilities. In the first (select POS, no NEG), we only include (and filter) positives from our unlabelled set, in the fashion of our basic method. The second entry, select POS, select NEG, also employs the found rules to further add negatives from the unannotated corpus. Both are able to display a further improvement in F1, while still maintaining a good recall-precision balance. Especially the application of the filter to add negatives (select POS, select NEG) warrants a substantial rise in F1 score through an additional improvement in precision compared to the model that only selects positives. To further evaluate the value of our pre-selection step, we also include the entry select POS, all NEG, which includes all negatives without filtering them. Compared to disregarding negatives from the unlabelled set altogether (select POS, no NEG), this can be seen as greatly improving precision, but at a sizeable cost to recall. This strengthens the contribution of our algorithm, as the entry select POS, select NEG shows an even greater increase in precision without overly damaging recall, leading to an impressive rise in F1. From Table 2, we can see that this model filters out less than 10% of the potential negatives; this is enough to alleviate the most disruptive noise. These results confirm the value of filtering the unlabelled data before presenting them to the learning algorithm. As this has been done here exclusively based on the limited amount of labelled data, leveraging additional knowledge in this step could generate even more significant gains. Weighted regularisation (entry [W_REG]), a method traditionally suggested to handle additional noise in semisupervised frameworks, also obtains a high recall for this test. This comes however at a very severe cost to its precision, compared to our newly developed solutions. This demonstrates the idiosyncratic nature of our methods as applied to this particular task with respect to mainline distant supervision methods, and further validates their contribution compared to utilising standard approaches. Deeper study is required on the impact of our methods, since the performance of a system greatly depends on e.g. the features it uses. It remains an open question what the impact is of these implementation choices, such as the feature representation used, data preprocessing, etc. in comparison to the higher-level model choice. We suspect that a more fine-grained encoding of sentence context could further contribute to the performance of any system in this field. Related work In information extraction −and relation extraction in particular− a major bottleneck is the lack of sufficient annotated examples. The manual labelling of enough training instances in order to build an accurate classifier is often prohibitively expensive. On the other hand, collecting a large quantity of unlabelled textual data is cheap. Thus, it is interesting to train the extraction system on a small annotated corpus and in some way improve the quality of the learned classification patterns by exploiting the unlabelled examples. 
This had lead to bootstrapping, semi-supervised and even unsupervised learning techniques. A good overview on semi-supervised learning, the framework in which this work is embedded, can be found in both [26] and [27]. The oldest methods regard self-training and co-training, where a classifier is trained iteratively. In self-training, examples from the pool of unlabelled instances are chosen in the next training step to which the current classifier assigns labels with most certainty. In co-training, examples are chosen in the next training step to which two or more current classifiers that possibly use an independent feature set assign labels with most certainty [28]. Such a set-up promotes that the newly introduced training examples have similar patterns as the originally labelled examples, so no radical new patterns are learned at least not in the first steps of the iteration. This approach also does not offer an answer to the danger that the obtained classification function drifts away from the real classification boundary. In a variant scenario, a generative probabilistic classifier is used (i.e., probabilities are not estimated directly, rather they are estimated indirectly by invoking Bayes' rule, e.g., a naive Bayes classification) for the training of the initial classifier based on the seed set of labelled examples. The Expectation Maximization (EM) algorithm is then used to train the classifier that learns both from the labelled and unlabelled examples [29], but the algorithm can easily get stuck in a local maximum. In so-called open domain information extraction, frequently occurring patterns that signal a relation between two entities are identified in a large set of unlabelled data [30,31]. These techniques are not well suited for the extraction of relations in the biomedical domain, especially when the detection of infrequent relations is targeted. Another line of research is the generation of additional features from the unlabelled data. One recent work is [32], building on the work of [33]. Those methods generally obtain state-of-the-art performance, but fail to improve on them significantly. The relation extraction models that we present in this paper are closest to the work of [15]. These authors find sentences in Web documents that contain two given entities. It is a priori known that these entities are involved in the sought relation. The selected sentences contain positive as well as negative examples of the sought relation. The negative examples for training the classifier are sentences in Web documents that contain two given entities for whom it is known that the sought relation does not hold between them [14,16] on the other hand approximate this by the equally often-used closed world assumption, which dictates that all relations are in the knowledge database. To cope with the noise in the set of positive examples, weighted regularization is used when training a SVM, as we do in this paper. Our experiments on texts from the biomedical literature show that this weighted regularization did not yield the best results for semi-supervised learning. We have pro- Conclusions We have explored the addition of unlabelled data to increase the recall of our system. However, the noisy nature of this data tends to affect precision negatively. We have designed a pipeline to autonomously counterbalance this effect, based on no additional external knowledge. 
A promising extension of this method would be to include specialised external knowledge, either injected directly into the feature representation, or in the process of attributing labels to unannotated data. This could prove to be a powerful technique in attaining a more precise overall system. Another interesting approach could be to construct a more extensive pipeline, using one of the more precision-bearing techniques to improve upon our proposed system. Promising methods in general information extraction make use of language models (e.g., probabilistic models of word distribution) trained on huge amounts of unlabelled examples in order to find valuable replacements of words in the relation patterns or to identify valuable correlated word features used in the classification [12,34,35]. Recent work in biomedical event extraction already touches upon such ideas [32]. This is a path we intend to further explore in future work. Another particularly interesting approach is showcased by [36], training a classifier jointly on both labelled and unlabelled data. A promising direction could be to apply similar methods to specialised language corpora, such as the biomedical texts explored in the BioNLP tasks. We argue for the importance of recall in any information extraction task, to serve as a driving force for automated knowledge collection. This study contributes to gaining a deeper insight in the different factors at play in the 2013 BioNLP GRN task with respect to measuring performance, and the interplay of precision and recall in particular. We hope this will spark further discussion and analysis of both task organisation and submitted systems, thus helping this shared task in driving forward the field of biomedical IE.
First case report of fulminant septic shock from meningococcemia associated with Cryptococcus neoformans coinfection in an immunocompetent patient The meningococcal disease manifestation associated with the presence of Cryptococcus neoformans is rare. There are no reports in the literature about these simultaneous infections in immunocompetent patients. The aim of the present study is to describe the first case of fulminant septic shock by Neisseira meningitidis associated with Cryptococcus neoformans coinfection in an immunocompetent patient. We describe a case of an immunocompetent 74-year-old Caucasian woman who presented with fulminant acute meningococcemia associated with cryptococcal meningitis, which progressed to worsening general condition and died of septic shock and multiple organ dysfunctions in less than 48 hours. This case report demonstrates the possibility of coinfections related to Neisseria meningitidis and Cryptococcus neoformans, even in immunocompetent patients, which represent a diagnostic challenge for clinicians, thus encouraging further studies for a better understanding. Introduction Meningococcemia is an infectious syndrome caused by gram negative diplococci, Neisseria meningitidis, a bacterium that is present in the nasopharynx of normal individuals. Meningococcal infection develops when the microorganism spreads from the nasopharyngeal mucosa and invades the bloodstream. Clinical manifestations of meningococcal disease vary, with some mild disease cases, but the most common manifestation is septic syndrome and/or meningitis [1,2]. Infectious meningitis is most often caused by bacteria or viruses. Fungal meningitis is rare, affecting individuals with immunodeficiencies, such as HIV/AIDS, transplanted, and using immunosuppression patients, however this disease is highly dangerous and requires rapid treatment to avoid sequelae [3]. Among the fungal meningitis types, cryptococcal meningitis caused by Cryptococcus neoformans is the most common, especially in patients with HIV/AIDS. After entry into the body, the fungus spreads through the bloodstream, reaching the lungs, kidneys, lymph nodes, skin, bones, prostate and ends up being introduced directly into the central nervous system, especially in the meninges. This type of infection is rare in people with a functional immune system and is considered an opportunistic fungus [3]. The immune response and pathogen virulence play an important role in the disease progression and may leads to severe sepsis and, consequently, septic shock, when not immediately treated or inadequately treated [2]. The manifestation of meningococcal disease associated with the presence of Cryptococcus neoformans is rare. The present study described the first case of Nesseria meningitidis fulminant septic shock associated with Criptococcus neoformans coinfection in an immunocompetent patient. Severe Meningococcal disease (MD) progresses rapidly to shock, multiple organ failure, and death within 24 hours if without urgent treatment. Non-specific symptoms such as fever, drowsiness, nausea and vomiting, irritability and poor appetite are present 4-6 hours after the disease onset. Non-specific sepsis signs, such as pain in the leg, cold hands and feet, and abnormal color, are also observed within 12 hours after disease onset. Classic ecchymotic patches resulting from rapidly developing meningococcal infection and neck pain or stiffness usually T appears after 12 hours. 
Unfortunately, most cases of MD are diagnosed after these late signs onset and it is quite common to find hospitalized patients with an incorrect initial diagnosis [4]. Coagulopathy associated with MD is frequent and usually multifactorial. There is an imbalance between coagulation and fibrinolysis and therefore, although formal coagulation tests may be significantly prolonged, there is a tendency for intravascular thrombosis [5]. The presence of meningococcal endotoxin in the blood generates a severe acute proinflammatory response. Cytokines stimulates the tissue factors release leading to the formation of thrombin and fibrin clots. Cytokines and thrombin inhibit tissue plasminogen activator by releasing the plasminogen activator inhibitor-1 (PAI-1), compromising the endogenous fibrinolytic route. Thrombin formation stimulates inflammatory pathways and further weakens the endogenous fibrinolytic system by activation of the thrombin activatable fibrinolysis inhibitor (TAFI). Activation of the endotoxin complement (mainly via alternative and mannose-binding pathways) leads to the accumulation of anaphylotoxins, such as C3a and C5a that induce endothelial injury [5]. Microthrombosis and endothelial dysfunction associated with the proinflammatory response reduces endothelial expression of thrombomodulin and endothelial protein C receptors, thereby compromising the activation of this protein, disabling fibrinolysis. The procoagulant and proinflammatory state associated with these changes produces endovascular injury, microvascular thrombosis, organ ischemia, and multisystem dysfunction [5]. With inhalation entry port, Cryptococcosis is a systemic mycosis caused by the Cryptococcus complex, currently with two species: Cryptococcus neoformans and Cryptococcus gattii. Both species appears as globular or oval yeast, 3-8 μm in diameter, with single or multiple budding, narrow neck, and surrounded by a characteristic mucopolysaccharides composed capsule [3]. Cryptococcus neoformans is cosmopolitan, occurring on various organic substrates, often associated with bird habitat, dry excreta rich in nitrogen sources such as urea and creatinine. Favorable conditions to the abundant growth of this yeast forms microfocus, noted mainly in urban centers and pigeon-related [6]. The home environment, particularly in domestic dust, can be positive, between 13% and 50% [7]. Meningoencephalitis is the most commonly diagnosed clinical form, occurring in more than 80% of cases, either in isolation or associated with pulmonary involvement. It most commonly presents as acute or subacute meningitis or meningoencephalitis, however, single or multiple focal lesions in the central nervous system (CNS), simulating neoplasias, associated or not with the meningeal condition, are observed in the immunocompetent host. This latest presentation has been linked to Cryptococcus gattii [8]. In immunocompetent patients the clinical picture, resulting from nervous system inflammation, is exuberant: meningeal signs (nausea, vomiting and stiff neck); signs of meningoencephalitis in one third of patients on admission (changes in consciousness; memory, language and cognition deficit); and involvement of cranial pairs (strabismus, diplopia, or facial paralysis (III, IV, VI and VII) [9]. Temporary or definitive visual impairment or amaurosis throughout the course and treatment reflects injury to the I cranial pair (ophthalmic). 
There is a great clinical pleomorphism in cryptococcal meningoencephalitis, and dementia may be the only disease manifestation. Physical examination may show meningeal irritation signs (Kerning, Brudzinski, Lasègue, neck stiffness and Lewinson), intracranial hypertension signs, such as papilledema, which usually corresponds to intracranial pressure > 350 mmHg. Other neurological signs, such as ataxia, sensory impairment and aphasia may be observed. Complications such as fungal ventriculitis, obstructive block hydrocephalus without meningitis, and cerebrospinal fluid (CSF) malabsorption hydrocephalus by meningitis are frequent [9]. Case Female, 74 years old, living in Londrina (PR), sought emergency care from a neighboring city of Joinville with nausea, vomiting, diarrhea and abdominal pain with 24 hours of evolution, being medicated and released with symptomatics. The patient evolved with consciousness level lowering, oligoanuria, hypotension, cold extremities and generalized ecchymotic spots (Fig. 1). She was then brought to the Emergency Hospital (day 0) in Joinville (SC), in 2019. Family members reported the patient had only systemic arterial hypertension, compensated with monotherapy. She was using losartan and omeprazole. They denied diabetes, alcoholism or other known comorbidities. At physical examination, the patient was unconscious in Glasgow 3, with inaudible blood pressure (BP), tachycardic and tachypneic, and signs of poor peripheral perfusion. Aggressive volume replacement, hemodynamic monitoring, vasoactive drug initiation and subsequent orotracheal intubation, invasive ventilatory support, and respiratory isolation were performed. Laboratory tests showed: Complete blood counts, WBC 30.9 × 109/μL (promyelocytes 1%, myelocytes 2%, metamielocytes 10%, bands 40%, segmented 39%, eosinophils 0%, lymphocytes 5% and monocytes 3%). RBC 3.98 × 1012/μL, hemoglobin 11.7 g/dL, hematocrit 37.4% and platelets counts 56.3 × 109/μL. Albumin 1. Cerebrospinal fluid (CSF) examination was cloudy, yellow in color, absent clot, with leucocyte count of 10.9 × 109/μL (polymorphonuclear 85%, lymphomononuclear 15%), red blood cells 8.16 × 109/μL, glucose 29 mg/dL, total proteins 389 mg/dL, chlorides 109 mEq/L, LDH 388 U/L. VDRL and BAAR were negative. Screening and bacterioscopy showed numerous neutrophils and some gram-negative diplococci. The investigation of cryptococcus with ink from China showed encapsulated yeast, and the fungal infection by Cryptococcus neoformans was confirmed by the research of cryptococcal antigen with latex agglutination test, however it was realized only in CSF. Sorological tests for hepatitis B and C, HIV, CMV and syphilis were negative (day 1). Urine and blood cultures showed no bacterial growth. Genogroup molecular testing (qPCR) for Neisseria meningitidis detected genogroup C in CSF and serum sample (day 3). After admission (day 0), ceftriaxone (2g every 12 hours), clindamycin (600mg every 12 hours) and dexamethasone were begun. After fungal infection identification, liposomal amphotericin B (600mg per days), fluconazole (400mg per days) were also begun. The patient evolved (day 2) with refractory septic shock, disseminated intravascular coagulation and progressive hemodynamic instability, without responsiveness to the proposed therapy. Emergency dialyses were attempted without clinical response. The patient evolved to multiple organ dysfunctions and died in less than 48 hours. 
Discussion The manifestation of meningococcal disease associated with the presence of Cryptococcus neoformans is rare, and there are no reports in the literature of these simultaneous infections in immunocompetent patients. The inflammatory response related to Neisseria meningitidis infection may progress to multiple organ dysfunction, particularly circulatory and renal failure and acute respiratory distress syndrome, which are common findings in meningococcal sepsis [10]. Fulminant septic shock associated with meningococcemia causes coagulation disorders, with thrombus formation and rapid evolution to disseminated intravascular coagulation. Meningococcemia is a life-threatening medical emergency requiring immediate recognition and treatment with antimicrobials; in some cases it progresses to death even after appropriate treatment [2]. Meningococcemia usually presents with a petechial or purpuric eruption, including the mucous membranes and especially the extremities, and may progress to disseminated eruptions and bruising related to meningococcal septic shock [2]. Cryptococcus neoformans is an opportunistic fungus usually associated with immunodeficiency, especially in patients with HIV who use antiretroviral therapy irregularly. It may also affect patients on prolonged corticosteroid use and those with diabetes, Hodgkin's disease, systemic lupus erythematosus, lymphoproliferative diseases, transplantation, sarcoidosis, liver cirrhosis, alcoholism, or undergoing chemotherapy [11][12][13]. In our case, the patient exhibited superficial disseminated purpuric lesions with marked ecchymosis and rapidly evolved with a consumptive process, disseminated intravascular coagulation, respiratory failure, circulatory and renal failure, and loss of consciousness, due to fulminant septic shock caused by meningococcemia, with multiple organ failure and death in less than 48 hours. We describe the first case of fulminant septic shock due to meningococcemia associated with the presence of Cryptococcus neoformans in the CSF of an immunocompetent patient. It is noteworthy that the patient had none of the comorbidities reported in the literature that could lead to Cryptococcus infection, which leads us to the hypothesis of a susceptibility to opportunistic fungal infection associated with meningococcemia. Conclusion Meningococcal disease associated with Cryptococcus neoformans coinfection is rare. In this case, an immunocompetent patient presented with acute fulminant meningococcemia associated with neurocryptococcosis, progressed with worsening of her general condition, and died of septic shock and multiple organ dysfunction in less than 48 hours. This case report highlights the possibility of coinfection with Neisseria meningitidis and Cryptococcus neoformans, even in immunocompetent patients, which represents a diagnostic challenge for clinicians and encourages further studies for a better understanding. Declaration of competing interest There are none.
Non ultracontractive heat kernel bounds by Lyapunov conditions Nash and Sobolev inequalities are known to be equivalent to ultracontractive properties of heat-like Markov semigroups, hence to uniform on-diagonal bounds on their kernel densities. In non ultracontractive settings, such bounds can not hold, and (necessarily weaker, non uniform) bounds on the semigroups can be derived by means of weighted Nash (or super-Poincar\'e) inequalities. The purpose of this note is to show how to check these weighted Nash inequalities in concrete examples, in a very simple and general manner. We also deduce off-diagonal bounds for the Markov kernels of the semigroups, refining E. B. Davies' original argument. The (Gagliardo-Nirenberg-) Nash inequality in R d states that for functions f on R d and is a powerful tool when studying smoothing properties of parabolic partial differential equations on R d . In a general way, let (P t ) t≥0 be a symmetric Markov semigroup on a space E, with Dirichlet form E and (finite or not) invariant measure µ. Then the Nash inequality for a positive parameter d, or more generally for an increasing convex function Φ, is equivalent, up to constants and under adequate hypotheses on Φ, to the ultracontractivity bound for instance n −1 (t) ≤ Ct −d/2 for 0 < t ≤ 1 in the case of (2). We refer in particular to [6] and [13] in this case, and in the general case of (3) to the seminal work [9] by T. Coulhon where the equivalence was first obtained; see also [3,Chap. 6]. Let us observe that Nash inequalities are adapted to smoothing properties of the semigroup for small times, but can also be useful for large times. This is in turn equivalent to uniform bounds on the kernel density of P t with respect to µ, in the sense that for µ-almost every x in E one can write P t f (x) = E f (y) p t (x, y) dµ(y) with p t (x, y) ≤ n −1 (t) (4) for µ⊗µ-almost every (x, y) in E×E. Observe finally that (3) is equivalent to its linearised form for a decreasing positive function b(u) related to Φ: this form was introduced by F.-Y. Wang [17] under the name of super-Poincaré inequality to characterize the generators L with empty essential spectrum. For certain b(u) it is equivalent to a logarithmic Sobolev inequality for µ, hence, to hypercontractivity only (and not ultracontractivity) of the semigroup. Moreover, relevant Gaussian off-diagonal bounds on the density p t (x, y) for x = y, such as p t (x, y) ≤ C(ε) t −d/2 e −d(x,y) 2 /(4t(1+ε)) , t > 0 for all ε > 0, have first been obtained by E. B. Davies [12] for the heat semigroup on a Riemannian manifold E, and by using a family of logarithmic Sobolev inequalities equivalent to (2). Such bounds have been turned optimal in subsequent works, possibly allowing for ε = 0 and the optimal numerical constant C when starting from the optimal so-called entropy-energy inequality, and extended to more general situations: see for instance [3,Sect. 7.2], [6] and [13] for a presentation of the strategy based on entropy-energy inequalities, and [5], [10,Sect. 2] and [11] and the references therein for a presentation of three other ways of deriving offdiagonal bounds from on-diagonal ones (namely based on an integrated maximum principle, finite propagation speed for the wave equation and a complex analysis argument). In the more general setting where the semigroup is not ultracontractive, then the uniform bound (4) cannot hold, but only (for instance on the diagonal) for a nonnegative function V . 
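Several displays appear to have been lost in this passage; as a hedged reconstruction (standard forms, possibly differing from the paper's exact normalisations and equation numbering), the objects referred to above read as follows.

```latex
% Classical Nash inequality on R^d:
\|f\|_2^{\,2 + 4/d} \;\le\; C_d\, \|\nabla f\|_2^{2}\, \|f\|_1^{\,4/d},
% generalised Nash inequality for a Dirichlet form E with invariant measure \mu:
\Phi\!\left(\|f\|_2^{2}\right) \;\le\; \mathcal{E}(f,f)
  \qquad \text{whenever } \|f\|_1 \le 1,
% equivalent ultracontractivity bound, e.g. n^{-1}(t) \le C\, t^{-d/2} for 0 < t \le 1:
\|P_t\|_{1 \to \infty} \;\le\; n^{-1}(t),
% non-uniform on-diagonal bound in the non ultracontractive case, with weight V:
p_t(x,x) \;\le\; n^{-1}(t)\, V(x)^2,
% weighted super-Poincare (linearised weighted Nash) inequality, for s \le s_0:
\int f^2\, d\mu \;\le\; s \int |\nabla f|^2\, d\mu
  \;+\; b(s) \left(\int |f|\, V\, d\mu\right)^{2}.
```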
Such a bound is interesting since it provides information on the semigroup : for instance if V is in L 2 (µ), then it ensures that P t is Hilbert-Schmidt, and in particular has a discrete spectrum. It has been shown to be equivalent to a weighted super-Poincaré inequality as in [18], where sharp estimates on high-order eigenvalues are derived, and, as in [2], to a weighted Nash inequality The purpose of this note is twofold. First, to give simple and easy to check sufficient criteria on the generator of the semigroup for the weighted inequalities (6)-(7) to hold : for this, we use Lyapunov conditions, which have revealed an efficient tool to diverse functional inequalities (see [1] or [8] for instance) : we shall see how they allow to recover and extend examples considered in [2] and [18], in a straightforward way (see Example 8). Then, to derive offdiagonal bounds on the kernel density of the semigroup, which will necessarily be non uniform in our non ultracontractive setting. For this we refine Davies' original ideas of [12]: indeed, we combine his method with the (weighted) super-Poincaré inequalities derived in a first step, instead of the families of logarithmic Sobolev inequalities or entropy-energy inequalities used in the ultracontractive cases of [3], [6] and [12]- [13]; we shall see that the method recovers the optimal time dependence when written for (simpler) ultracontractive cases, and give new results in the non ultracontractive case (extending the scope of [2] and [18]). Instead, we could have first derived on-diagonal bounds, such as (5), and then use the general results mentionned above (in particular in [11]) and giving off-diagonal bounds from on-diagonal bounds; but we will see here that, once the inequality (6)- (7) has been derived, the off-diagonal bounds come without further assumptions nor much more effort than the on-diagonal ones. To make this note as short and focused on the method as possible, we shall only present in detail the situation where U is a C 2 function on R d with Hessian bounded by below, possibly by a negative constant, and such that e −U dx = 1. The differential operator L defined by Lf = ∆f − ∇U · ∇f for C 2 functions f on R d generates a Markov semigroup (P t ) t≥0 , defined for all t ≥ 0 by our assumption on the Hessian of U . It is symmetric in L 2 (µ) for the invariant measure dµ(x) = e −U (x) dx. We refer to [3,Chap. 3] for a detailed exposition of the background on Markov semigroups. Let us point out that the constants obtained in the statements do not depend on the lower bound of the Hessian of U , and that the method can be pursued in a more general setting, see Remarks 3, 7 and 11. We shall only seek upper bounds on the kernel, leaving lower bounds or bounds on the gradient aside (as done in the ultracontractive setting in [10, Sect. 2], [13] or [16] for instance). Let us finally observe that S. Boutayeb, T. Coulhon and A. Sikora [5, Th. 1.2.1] have most recently devised a general abstract framework, including a functional inequality equivalent to the more general bound p t (x, x) ≤ m(t, x) than (5), and to the corresponding off-diagonal bound. The derivation of simple practical criteria on the generator ensuring the validity of such (possibly optimal) bounds is an interesting issue, that should be considered elsewhere. Definition 1. Let ξ be a C 1 positive increasing function on (0, +∞) and φ be a continuous positive function on R d , with φ(x) → +∞ as |x| goes to +∞. We first state our general result: Proposition 2. 
In the notation of Definition 1, assume that there exists a ξ-Lyapunov function W with rate φ. Then there exist C and s 0 > 0 such that for any positive continuous for all smooth f and s ≤ s 0 . Here, for r > 0 Here and below a ∨ b stands for max{a, b}. Remark 3. Proposition 2 can be extended from the R d case to the case of a d-dimensional connected complete Riemannian manifold M . If M has a boundary ∂M we then have to suppose that ∂ n W ≤ 0 for the inward normal vector n to the boundary, namely that the vector ∇W is outcoming at the boundary. We also have to check that a local super-Poincaré inequality holds: this inequality can easily be obtained in the R d case by perturbation of the Lebesgue measure, as in the proof below; for a general manifold, it holds if the injectivity radius of M is positive or, with additional technical issues, if the Ricci curvature of M is bounded below (see [8] or [17] and the references therein). Corollary 4. Assume that there exist c, α and δ > 0 such that for all smooth f and s ≤ s 0 . Here The first hypothesis in (9) allows the computation of an explicit Lyapunov function W as in Definition 1, with an explicit map φ, hence ψ in (8); the second hypothesis is made here only to obtain an explicit map h in (8), and then the s −p dependence as in the super Poincaré inequality (10). Observe also from the proof that s 0 does not depend on β, and that the constant C obtained by tracking in the proof its dependence on the diverse parameters would certainly be far from being optimal, as it is always the case in Lyapunov condition arguments. A variant of the argument leads to a (weighted if α < 1) Poincaré inequality for the measure µ, see Remark 13 below. For α > 1 we can take γ = 0 in Corollary 4, obtaining a super-Poincaré inequality with the usual Dirichlet form and a weight V , and then non uniform off-diagonal bounds on the Markov kernel density of the associated semigroup: Theorem 5. Assume that ∆U − |∇U | 2 /2 is bounded from above, and that there exist c, δ > 0 and α > 1 such that for all x ∈ R d . Then for all t > 0 the Markov kernel of the semigroup (P t ) t≥0 admits a density p t (x, y) which satisfies the following bound : for all β ∈ R and ε > 0 there exists a constant C such that In particular, if δ ≤ α − 1 (as in Example 8 below), and for β = 0, we obtain p = d/2 as in the ultracontractive case of the heat semigroup. The bound on the kernel density derived in Theorem 5 will be proved for all t > 0, but with an extra e Ct factor, hence is relevant only for small times. For larger times it can be completed as follows : Remark 6. In the assumptions and notation of Theorem 5, the measure µ satisfies a Poincaré inequality; there exists a constant K > 0 and for all t 0 > 0 there exists C = C(t 0 ) such that for all t ≥ t 0 and almost every ( Our method can be extended from the case of diffusion semigroups to more general cases. For instance the weighted Nash inequalities can be derived for discrete valued reversible pure jump process as then a local super-Poincaré inequality can easily be obtained. Then we should suppose that the Lyapunov condition holds for ξ(w) = w and use [8,Lem. 2.12] instead of [7, Lem. 2.10] as in the proof below. Observe moreover that [6] has shown how to extend Davies' method [12] for off-diagonal bounds to non diffusive semigroups. Example 8. The measures with density Z −1 e −u(x) α for α > 0 and u C 1 convex such that e −u dx < +∞ satisfy the first hypothesis in (9) in Corollary 4 and Theorem 5. 
Indeed, by with K > 0 and for |x| large, so In particular for u(x) = (1 + |x| 2 ) 1/2 , the hypotheses of Theorem 5 hold for α > 1 and δ = α − 1 (and U = u α has a curvature bounded by below), thus recovering the on-diagonal bounds given in [18] (and [2] in dimension 1), and further giving the corresponding off-diagonal estimates. It was observed in [2] that in the limit case α = 1 the spectrum of −L does not only have a discrete part, so an on-diagonal bound such as p t (x, x) ≤ C(t)V (x) 2 with V in L 2 (µ) can not hold. For α > 2 the semigroup is known to be ultracontractive (see [14] or [3,Sect. 7.7] for instance), and adapting our method to this simpler case were V = 1 leads to the corresponding off-diagonal bounds, see Remark 12. A variant of the argument also gives a weighted Poincaré inequality for the measure µ, see Remark 13. Example 10. The measures with density Z −1 u(x) −(d+α) for α > 0 and u C 2 convex such that e −u dx < +∞ satisfy the first hypothesis in Corollary 9. Indeed, by (11), for any ε > 0, and for |x| large. In particular, choosing u(x) = (1 + |x| 2 ) 1/2 , the hypotheses of Corollary 9 hold with δ = 0 for the generalized Cauchy measures with density The rest of this note is devoted to the proofs of these statements and further remarks. Notation. If ν is a Borel measure, p ≥ 1 and r > 0 we let · p,ν be the L p (ν) norm and Proof of Proposition 2. We adapt the strategy of [8, Th. 2.8], writing for r > 0. By assumption on φ and W , and letting Φ(r) = inf |x|≥r φ(x), the latter term is by integration by parts (as in [7, Lem. 2.10]) and for r ≥ r 0 . Hence for all r ≥ r 0 for ω = 1 ∨ 1/ξ ′ (W ). Now for all r > 0 the Lebesgue measure on the centered ball of radius r satisfies the following super-Poincaré inequality : for all u > 0 and smooth g, where b(r, u) = c d (u −d/2 + r −d ) for a constant c d depending only on d. For r = 1 (say) this is a linearization of the (Gagliardo-Nirenberg-) Nash inequality for the Lebesgue measure on the unit ball; then the bound and the value of b(r, u) for any radius r follow by homogeneity. Let now s ≤ s 0 := 4/Φ(r 0 ) be given. Choosing r := ψ(4/s) and then u := k min{1/h(r), s/8}, we observe that r ≥ r 0 and u ≤ ks 0 /8, so Proof of Corollary 4. Let W ≥ 1 be a C 2 map on R d such that W (x) = e a|x| α for |x| large, where a > 0 is to be fixed later on. Then for |x| large, by direct computation. Now for |x| ≥ (2c 2 ) 1/α by the first assumption in (9), and for a < (2cα) −1 , so for such an a there exists a constant C > 0 for which for |x| large. Proof of Theorem 5. It combines and adapts ideas from [2] and [12], replacing the family of logarithmic Sobolev inequalities used in [12] by the super-Poincaré inequalities (10). The positive constants C may differ from line to line, but will depend only on U and V . The proof goes in the following several steps. 1. Let f be given in L 2 (µ). With no loss of generality we assume that f is non negative and C 2 with compact support, and satisfies f V dµ = 1. Let also ρ > 0 be given and ψ be a C 2 bounded map on R d such that |∇ψ| ≤ 1 and |∆ψ| ≤ ρ (the formal argument would consist in letting ψ(x) = x · n for a unit vector n in R d ). For a real number a we also let ϕ(x) = e aψ(x) . We finally let F (t) = ϕ −1 P t (ϕf ). Evolution of by integration by parts. Indeed, following the proof of [2, Cor. 3.1], two integrations by parts on the centered ball B r with radius r > 0 ensure that for the inward unit normal vector n. 
Then a lower bound λ ∈ R on the Hessian matrix of U yields the commutation property and bound by our assumption on f and ϕ; moreover, on the sphere |x| = r, both In turn this term goes to 0 as r goes to infinity since for instance for all |x| = r large enough, by assumption on ∇U . Hence both boundary terms go to 0 as r goes to infinity; this proves (16). Then the map z(t) = e −2a 2 t y(t) satisfies It follows by integration that . This last bound holds as long as y(t) ≥ c(p + 1)u p 0 e 2Kt , and then for all t provided we take a possibly larger constant C (still depending only on U and V ) in it and in the definition of K. 1.3. In other words, for such a function f : for all t > 0, where c(t) 2 = C t p e 2Kt , K = C + (ρ + |β|)|a| + a 2 . 2. Duality argument. Let ϕ be defined as in step 1. By homogeneity and the bound |P t (ϕf )| ≤ P t (ϕ|f |), it follows from step 1 that for all t > 0 and all continuous function f with compact support. Let now t > 0 be fixed, Q defined by Qf = ϕ −1 P t (ϕf ) and W = c(t)V . Then The bound also holds for −a, so for ϕ −1 instead of ϕ, so for ϕP t (ϕ −1 f ) = Q * f where Q * is the dual of Q in L 2 (µ): for all f and W 2 µ-almost every x, hence (Lebesgue) almost every x since W ≥ 1 and µ has positive density. Observing that P t f = ϕ Q(ϕ −1 f ) and Q(g) = W R(W −1 g), it follows that for all f and almost every x. Hence the Markov kernel of P 2t has a density with respect to µ, given by 3. Conclusion. It follows from step 2 that the semigroup (P t ) t≥0 at time t > 0 admits a Markov kernel density p t (x, y) with respect to µ, such that for all real number a and all C 2 bounded map ψ on R d with |∇ψ| ≤ 1 and |∆ψ| ≤ ρ, the bound hold for almost every ( We now let t > 0, x and y be fixed with y = x. Letting r = |x| ∨ |y| and n = y−x |y−x| , we let ψ be a C 2 bounded map on R d such that |∇ψ| ≤ 1 and |∆ψ| ≤ ρ everywhere, and such that ψ(z) = z · n if |z| ≤ r. For instance we let h(z) be a C 2 map on R with h(z) = z if |z| ≤ r, h constant for |z| ≥ R and satisfying |h ′ | ≤ 1 and |h ′′ | ≤ ρ, which is possible for R large enough compared to ρ −1 ; then we let ψ(z) = h(z · n). Such a map ψ satisfies ψ(x) − ψ(y) = −|x − y|, so the quantity in the exponential in (19) is Since ρ > 0 is arbitrary (and the constant C depends only on U and V ) we can let ρ tend to 0. Then we use the bound |aβ| ≤ εa 2 + 1 4ε β 2 and optimise the obtained quantity by choosing a = |x − y|/(2t(1 + ε)), leading to the bound and concluding the argument. Remark 11. The computation in steps 1 and 2 of this proof could be written in the more general setting of a reversible diffusion generator L on a space E with carré du champ Γ, under the assumptions Γ(ψ) ≤ 1, L(V ϕ −1 ) ≤ KV ϕ −1 and for all g and s ≤ s 0 . Then step 3 would yield the corresponding bound with |x − y| replaced by the intrinsic distance ρ(x, y) = sup |ψ(x) − ψ(y)|; Γ(ψ) ≤ 1 . Proof of Remark 6. First observe that, under the first hypothesis in (9) in Corollary 4, with α ≥ 1, then (15) in the proof of this corollary ensures that for all x and for positive constants b and C. This is a sufficient Lyapunov condition for µ to satisfy a Poincaré inequality (see [1,Th. 1.4]). Then we can adapt an argument in [3,Sect. 7.4], that we recall for convenience. We slightly modify the notation of the proof of Corollary 4, letting a = 0 (hence ϕ = 1), and Then, by the Poincaré inequality for µ, there exists a constant K > 0 such that for all t 0 > 0, by step 2 in the proof of Corollary 4, so for all t ≥ 0 and t 0 > 0. 
Changing t + 2t_0 into t ≥ t_0 and writing the kernel density of R_t − R_∞ in terms of the kernel density p_t(x, y) of P_t leads to the announced bound on the density p_t(x, y) − 1. Remark 12. The proof of Theorem 5 simplifies in ultracontractive situations where one takes 1 as the weight V. For instance, for the heat semigroup on R^d, one starts from the (non-weighted) super-Poincaré inequality for the Lebesgue measure on R^d, ∫ g² dx ≤ u ∫ |∇g|² dx + c_d u^{−d/2} (∫ |g| dx)², valid for all u > 0 and for a constant c_d depending only on d (a linearization of the Nash inequality (1) for the Lebesgue measure on R^d, which can also be recovered by letting r go to +∞ in (13)). The constant K obtained in step 1.1 is K = ρ|a| + a², and the very same argument leads to the (optimal in t) off-diagonal bound p_t(x, y) ≤ C t^{−d/2} e^{−|x−y|²/(4t)} for all t > 0 (also derived in [3, Sect. 7.2] for instance, with the optimal constant C = (4π)^{−d/2}, when starting from the Euclidean logarithmic Sobolev inequality). The argument can also be written in the ultracontractive case of U(x) = (1 + |x|²)^{α/2} with α > 2: one starts from the super-Poincaré inequality ∫ F² dµ ≤ u ∫ |∇F|² dµ + e^{c(1 + u^{−α/(2α−2)})} (∫ |F| dµ)² for all u > 0 (see [17, Cor. 2.5] for instance). Then one obtains the off-diagonal bound p_t(x, y) ≤ e^{C(1 + t^{−α/(α−2)})} e^{−|x−y|²/(4t)} for all t > 0, also derived in [3, Sect. 7.3] by means of an adapted entropy-energy inequality.
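Two computational steps invoked in the proofs above may be worth spelling out; both are sketches under stated assumptions rather than restatements of the displays that are not reproduced here. First, the "direct computation" in the proof of Corollary 4, for W(x) = e^{a|x|^α} and L = Δ − ∇U·∇, reads

\[
  \nabla W = a\alpha |x|^{\alpha-2}\, x\, W, \qquad
  \Delta W = \Big( a^2\alpha^2 |x|^{2\alpha-2} + a\alpha(d+\alpha-2)|x|^{\alpha-2} \Big) W,
\]
\[
  \frac{LW}{W} \;=\; a^2\alpha^2 |x|^{2\alpha-2}
  \;+\; a\alpha(d+\alpha-2)|x|^{\alpha-2}
  \;-\; a\alpha |x|^{\alpha-2}\, x\cdot\nabla U(x).
\]

If the first hypothesis in (9) provides a lower bound of the type x·∇U(x) ≳ |x|^α/c at infinity (an assumption on our part, since (9) itself is not quoted above), the negative last term dominates the quadratic one for a small enough, giving a bound of the form LW ≤ −C|x|^{2(α−1)} W for |x| large, as used in the proof. Second, the optimisation over a in step 3 of the proof of Theorem 5: up to a-independent terms and the bookkeeping of the time variable in step 2, after using |aβ| ≤ εa² + β²/(4ε) the exponent to minimise has the form −a|x−y| + (1+ε)a²t, whose minimiser and minimum are

\[
  a^\star = \frac{|x-y|}{2t(1+\varepsilon)}, \qquad
  -\,a^\star |x-y| + (1+\varepsilon)(a^\star)^2 t \;=\; -\,\frac{|x-y|^2}{4t(1+\varepsilon)},
\]

which is the Gaussian off-diagonal factor appearing in Theorem 5 and in Remark 12.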
5,353.6
2013-11-09T00:00:00.000
[ "Mathematics" ]
Article Front-End Light Source for a Waveform-Controlled High-Contrast Few-Cycle Laser System for High-Repetition Rate Relativistic Optics We present the current development of an injector for a high-contrast, ultrashort laser system devoted to relativistic laser-plasma interaction in the few-cycle regime. The front-end is based on CEP-stabilized Ti:Sa CPA followed by XPW filter designed at the mJ level for temporal cleaning and shortening. Accurate characterization highlights the fidelity of the proposed injector. Measured CEP drift is 170 mrad rms. Introduction A recent breakthrough of ultrafast laser science has been the ability to produce laser pulses with duration close to the optical cycle (2.7 fs at 800 nm) with controlled waveform, i.e., controlled carrier-envelope phase (CEP).The availability of such pulses has provided new insight into attosecond electronic processes in matter [1].CEP-controlled few-cycle pulses are routinely used at moderate intensities to probe attosecond electronic dynamics in atoms or molecules.More recently, few-cycle pulses with relativistic intensities (above 10 18 W/cm 2 at 800 nm) have been used to study collective electronic dynamics in laser-driven plasmas, such as few-fs electron bunch acceleration from gas jets [2,3] or high-harmonic generation from solid targets [4].Unfortunately, all these pioneering experiments were performed without CEP control. Recent experiments at Laboratoire d'Optique Appliquée have demonstrated, for the first time, attosecond time-scale control over plasma electron dynamics using a CEP-controlled few-cycle laser system at 1 kHz (Salle Noire), as well as the ability to isolate single attosecond pulses from the harmonic emission via spatio-temporal gating of the laser-plasma interaction [5,6].Moreover, few-cycle laser-driven energetic particle acceleration was also demonstrated using the same experimental set-up [7].These pioneer experiments were performed with the Salle Noire laser, delivering 1.2 mJ, 5 fs pulses and a relative CEP drift of 250 mrad rms [8].Reaching the relativistic regime with few-cycle pulses, especially on solid targets, requires both an increase in the pulse energy and temporal contrast.As a consequence, we are in the process of upgrading our laser system to produce CEP-controlled, 5 mJ, 5 fs pulses with a temporal contrast ratio better than 10 10 . Nowadays, two major techniques are widely used to efficiently enhance the final pulse contrast: plasma mirrors at the end of the laser chain [9,10] and nonlinear temporal filters in a double-CPA scheme [11].In the past few years, we have developed and qualified a nonlinear filtering device relying on cross-polarized wave (XPW) generation [12].XPW generation is an achromatic and degenerated four wave mixing process relying on the anisotropy of the χ 3 tensor in isotropic crystals [13].When an intense, linearly polarized input beam (fundamental) illuminates a nonlinear crystal placed between crossed polarizers, it gives rise to an orthogonally polarized wave (XPW).The cubic dependence between output and input intensities provides an improvement in temporal contrast that is limited only by the extinction ratio of the crossed polarizers.Nowadays, XPW-based contrast enhancement has become widespread in high peak-power laser systems [10,[14][15][16][17]. 
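Since the cubic intensity dependence and the crossed-polarizer extinction ratio jointly set the achievable contrast, a rough numerical sketch may help fix orders of magnitude. The model below (ideal cubic conversion of prepulses plus leakage of the fundamental through the polarizers) is our own simplification for illustration only; the specific numbers, an input contrast of 10^8 and an extinction ratio of 10^4, are the ones quoted later in Section 3.

# Hedged order-of-magnitude model of XPW contrast enhancement.
# Assumptions (ours): prepulses convert with the ideal cubic law, the main
# pulse is normalized to unity at the output, and the dominant limitation is
# fundamental light leaking through the crossed polarizers (extinction ratio R).
def xpw_output_contrast(input_contrast: float, extinction_ratio: float) -> float:
    prepulse = 1.0 / input_contrast          # relative prepulse intensity at the input
    converted = prepulse ** 3                # ideal cubic XPW conversion of the prepulse
    leaked = prepulse / extinction_ratio     # fundamental leakage through the polarizer
    return 1.0 / (converted + leaked)        # resulting output contrast ratio

print(f"{xpw_output_contrast(1e8, 1e4):.1e}")    # ~1e12: limited by the extinction ratio
print(f"{xpw_output_contrast(1e8, 1e30):.1e}")   # ~1e24: the ideal cubic limit

In this simple picture the enhancement saturates at the extinction ratio, consistent with the statement above that the extinction of the crossed polarizers is the only limit to the overall contrast improvement.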
The planned upgraded Salle Noire laser will first consist of a Ti:Sa double-CPA system, providing 1 kHz, 10 mJ, 25 fs pulses.The first CPA is a commercial CEP-stabilized laser system.The compressed pulses are then temporally filtered by an XPW setup and stretched to about 50 ps during propagation through a SF57 bulk and an AOPDF (Dazzler).Further amplification will occur in two multipass amplifiers.Pulse compression will be achieved in a specially designed, high-energy, grism compressor [18].It has been shown that XPW process and compression in grisms both preserve CEP stability [19].The double-CPA is therefore expected to produce high-contrast, multi-mJ, CEP-stabilized pulses.Further post-compression in a hollow-core fiber setup to the few-cycle regime will aim at providing around 5 mJ, 5 fs pulses for relativistic laser-plasma interaction [8,20]. In this paper, we present a detailed characterization of the CEP-controlled, high-contrast injector for the second CPA, consisting of a Ti:Sa CPA followed by a high-efficiency XPW contrast filter at the mJ level.We demonstrate that this injector fulfills all the critical requirements in order to seed the second CPA stage: high-contrast, high spatio-temporal quality and high fidelity in terms of stability.The added benefit of XPW filtering is the significant spectral broadening that ensues, which will allow us to obtain very short pulses at the output of the future second CPA stage.The current front-end light source routinely delivers 300 µJ pulses, with a Gaussian spectrum with more than 100 nm bandwidth (FWHM).Temporal compression of the XPW pulse to 9.3 fs is demonstrated.The results show that XPW filtering can be used as a simple and efficient pulse shortener. Experimental Setup The proposed laser system, described on Figure 1, consists of a commercial 1 kHz CEP-stabilized amplifier (Femtopower Compact Pro CE Phase, Femtolasers GmbH) followed by an XPW contrast filter.The oscillator pulses (Rainbow) are CEP locked via pump-laser amplitude modulation and stretched in a 20 cm-long SF57 block before amplification up to 1.9 mJ in a ten-pass titanium-sapphire amplifier.The amplification stage is pumped by 11 W from a 1 kHz frequency-doubled Q-switched Nd:YLF laser (JADE, Thales).An AOPDF (low-jitter Dazzler HR800, Fastlite) is inserted between the fourth and fifth passes in the amplifier in order to optimize the spectral amplitude and phase.Finally, the 7 ps long amplified pulses are then compressed through a transmission-grating compressor (TGC) with an overall efficiency of 85%.The TGC consists of a pair of 1280 g /mm gratings used at the Littrow incidence angle (30.8 • ) and separated by 10 mm.We have already shown that this compression setup is fully compatible with CEP stabilization [21].The compressed pulse duration is slightly below 30 fs and the Gaussian output beam has a diameter of 10 mm (1/e 2 ).Self-referenced spectral interferometry (Wizzler device [22]) (Figure 2) enables the fine-tuning of the residual spectral phase of the compressed pulses, which is critical before they are sent into the XPW filtering stage [23].Typical energy stability at this stage is around 1% RMS over hours.In our current setup, the XPW filter is an optimized version of that described in [24].Thanks to the TGC providing pure polarization and thanks to little propagation between the TGC and the XPW filter, the use of an input polarizer is not necessary.The XPW setup consists of a 47 cm, 250 µm inner diameter hollow-core waveguide providing efficient (75% 
transmission) spatial filtering, followed by two nonlinear BaF 2 crystals, all under vacuum.Coupling of the beam into the hollow-core fiber is achieved using a 1.75 m focal length mirror.Slow drift of the input beam pointing is corrected using a home-built beam stabilization system, ensuring constant long-term fiber transmission.Two thin, 1.5 mm thick, BaF 2 crystals with holographic crystallographic orientation are employed for XPW conversion, situated 35 cm and 41 cm respectively after the fiber output.The position of the first crystal corresponds to a fundamental beam diameter of 1.7 mm (1/e 2 ), adapted to the incident 1 mJ.This scheme does not correspond to the standard two-crystal scheme described in [25], where phase-matched XPW generation is ensured by Kerr focusing of the fundamental beam between the two crystals.In this scheme, the fundamental beam diverges out of the fiber and thus self-focusing in the first crystal mainly collimates the fundamental beam.However, we noticed that using two thin crystals instead of a thicker one leads to a slightly higher output energy and improved pulse-to-pulse stability.Because the XPW is driven closer into saturation, the pump spatial profile is depleted during XPW conversion and free-space propagation enables to optimize the fundamental spatial beam profile and improve the XPW conversion efficiency in the second crystal.A direct consequence of the improved efficiency XPW close to saturation is the enhanced pulse-to-pulse stability of the device.The advantage of this approach is that it is scalable to higher energies [24].Here, the output beam is then expanded to go through the output Glan polarizer enabling XPW pulse selection.300 µJ pulses are routinely generated, corresponding to a XPW internal efficiency as high as 33% (fiber transmission and reflection losses on crystals deduced) and a global energy transmission of 22%. 3. Characterization of Front-End Light Source Performance Spectro-Temporal Characterization The pulse contrast improvement cannot be measured precisely with enough dynamic range at this energy level.However, as previously demonstrated [26], the extinction ratio of the polarization management is the only limit to the overall contrast enhancement.In the current system, we measured an extinction ratio of 4 orders of magnitude, which, starting from a contrast ratio of the order of 10 8 [24], leads us to infer the temporal contrast of front-end system to be of the order of 10 12 . The initial fundamental and XPW spectra are compared in Figure 2(a).The initial laser spectrum is shaped into a perfectly Gaussian spectrum during the nonlinear process.This feature is a direct consequence of the XPW temporal cleaning properties [27][28][29].This is an essential point for the final spatio-temporal quality of the output pulses. Moreover, extensive spectral broadening is measured.The calculated FWHM spectral bandwidth is increased by a factor of 2.4.The expected temporal shortening (the ratio of input laser pulse duration over output XPW pulse duration using Fourier-transform limited values) is around 3. The discrepancy comes from the differing spectral intensity profiles.The output spectrum supports sub-10 fs duration (8.6 fs Fourier transform limited).At lower efficiencies, for an unchirped pulse, spectral broadening following XPW is a direct consequence of the shortening of the pulse duration by a factor of 1.7 [23]. 
Here, the higher XPW efficiency is associated with important self-phase modulation (SPM) on the fundamental beam, and thus significant spectral broadening, a feature that is transferred onto the XPW wave.Simultaneous pulse shortening during the process prevents the XPW pulse from spectral and temporal distortions [30].The higher the XPW efficiency, the more pronounced the broadening.This is illustrated in Figure 3, displaying the spectral evolution of the fundamental and XPW waves with respect to input pulse energy.While the fundamental spectrum exhibits features typical of SPM, the process provides smooth and homogeneous broadening of the XPW spectrum.These clean spectral features indicate that SPM on the XPW beam is negligible.Although significant spectral broadening and reshaping during XPW have already been observed before [27], they are particularly emphasized here.This results from the combination of high efficiency and perfect spectral phase correction thanks to the spectral phase-loop optimization of the Wizzler device (Figure 2(a)).In order to characterize pulse fidelity of the front-end, we then verified the spectral repeatability.The short-term stability is measured by the acquisition of 500 consecutive spectra of the amplified and XPW pulses.Despite the fact that broadening is due to nonlinear effects, spectral stability is not degraded after XPW and remains below 2% RMS across the entire spectrum.The long-term stability was monitored by recording the spectrum at regular intervals over 90 nm, showing only minor variations (Figure 2(b)).The available spectral bandwidth for further amplification will therefore not be significantly affected by the weak energy fluctuations of the input amplifier beam. For temporal compression and characterization, the XPW pulse is discriminated by a reflective Ga polarizer to minimize the propagation in dispersive elements.The dispersion introduced by the crystals and chamber output window is compensated by chirped mirrors.Temporal measurement is provided by SRSI (self-referenced spectral interferometry, Wizzler) adapted to ultrashort pulses [31].The result indicate a pulse duration of 9.3 fs (Figure 2(c)), confirming a compression factor of 3 during the XPW process.Spectral phase oscillations are typical of chirped mirrors, ultimately limiting the spectro-temporal quality of the XPW pulse. 
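As a quick cross-check of the quoted transform-limited duration, the 105 nm FWHM Gaussian fit of Figure 2(a) can be converted into a time-bandwidth-limited pulse duration. The centre wavelength (taken here as 800 nm) and the ideal-Gaussian assumption are ours, so the result should only land near, not exactly on, the 8.6 fs value obtained from the measured spectrum and the 9.3 fs measured pulse.

import numpy as np

c = 299_792_458.0     # speed of light (m/s)
lam0 = 800e-9         # assumed centre wavelength (m); not restated in this section
dlam = 105e-9         # Gaussian FWHM of the XPW spectrum from Figure 2(a) (m)

dnu = c * dlam / lam0**2           # FWHM frequency bandwidth (Hz), valid for dlam << lam0
tbp = 2.0 * np.log(2.0) / np.pi    # time-bandwidth product of a Gaussian pulse (~0.441)
dt_fs = tbp / dnu * 1e15           # transform-limited FWHM duration (fs)

print(f"bandwidth ~ {dnu / 1e12:.0f} THz, transform limit ~ {dt_fs:.1f} fs")
# roughly 49 THz and ~9 fs, consistent with the sub-10 fs support quoted above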
Spectro-Spatial Characterization The creation of new frequencies by SPM, beyond the typical 1.7 spectral broadening factor, is intensity-dependent and therefore varies across the Gaussian input beam profile.This feature affects the spatial homogeneity of the XPW pulse spectrum, as shown in Figure 4(a), representing the measured XPW spectrum through a 0.2 mm pinhole translated across the beam, 15 cm after the second crystal.The spectral FWHM varies between 60 nm at the edges and 110 nm at the center (Fourier transform limited durations: 15 fs and 8.5 fs, respectively).However, the same measurement performed on the beam after 2 m propagation (far-field) exhibits a broad and homogeneous spectrum across the beam (Figure 4(b)).The complex spatio-temporal dynamics of this process is currently under study.In first approximation, it can be explained by cross-phase modulation of the XPW beam by the intense pump beam.It has recently been demonstrated that wavelength-dependent nonlinear focusing follows nonlinear spectral phase [32].Consequently, extreme wavelengths, created at the highest beam intensity, may experience stronger self-focusing, leading to nonlinear wavelength-dependent propagation after the crystal that compensates the initial spatio-spectral inhomogeneity.The final spectro-spatial homogeneity is not a particular feature of the double-crystal scheme and the same results are obtained with a single crystal.To summarize, although nonlinear self-action is negligible for the XPW beam, SPM and Kerr focusing undergone by the pump beam confer to the XPW pulse a broad, spatially uniform, Gaussian spectrum, optimal for further amplification.The measurements presented in Figure 4 underscore the excellent spatial quality of the output beam in the far field.Figure 4(c) shows the intensity profile of the pulses measured at the focus of a 500 mm lens.The spatial profile is Gaussian with a beam Strehl ratio of about 0.9.The measured shot-to-shot beam pointing stability of the beam after XPW is 30 µrad (8 µrad rms).This is yet another direct advantage of the wave guided XPW setup. This simple and efficient XPW setup can consequently be used as a pulse shortener device, providing excellent spatial and spectro-temporal quality. Fidelity of the Front-End To further quantify the fidelity of the front-end system, long-term energy stability is monitored continuously and is usually below 2% rms after XPW. Figure 5(a) is an illustration of typical energy fluctuations over a few minutes of the pump laser, amplified and filtered pulses.CEP stability of the front-end system is the last critical feature to be characterized.To estimate it, a few microjoules of the XPW pulse are split off, compressed with chirped mirrors and sent into a collinear f-to-2f interferometer (APS 800, Menlo Systems).The slow CEP drift of the output pulses at 1 kHz is pre-compensated by a feedback loop to the oscillator locking electronics.For the measurement, the spectrometer acquisition time is 1 ms and the cycle loop time of the APS software is 100 ms.Results are shown in Figure 5(b).The rms phase error is 170 mrad over 16 min.Although slightly higher than the amplifier (typically 100 mrad rms), this value confirms the robustness of the XPW filter.We estimate the degradation of the CEP stability to be partly due to the measurement itself, which is affected both by the input pulse-to-pulse stability and by the limited ability of the APS device to handle such short (spectrally broad) pulses. 
Conclusions To conclude, we have described a front-end light source featuring high contrast and high spatial and spectral quality pulses of potential sub-10 fs duration.The overall CEP fluctuations of the system can be reliably kept well under 200 mrad rms.We expect this excellent pulse fidelity to be improved or at least preserved thanks to careful management of saturation effects in any further amplification stages.Although the effect of a second CPA on the overall CEP stability is unknown, its measurement should benefit from an improved pulse-to-pulse energy stability and accurate compression of the laser pulse.This work also shows that XPW acts as a temporal post-compression device, producing sub-10 fs pulses with exceptional spatio-temporal fidelity.Previous validation of this setup with an input energy as high as 11 mJ [33] demonstrates that XPW can be configured as an efficient multi-mJ post-compression device. Figure 2 . Figure 2. (a) Initial fundamental spectral amplitude (shaded area) and phase (dashed black line), measured after the TGC.XPW spectrum (red line) with a Gaussian fit (dashed blue line, 105 nm FWHM); (b) Density plot of XPW spectra (averaged over 500 shots) recorded every 5 nm over 90 nm.Lower spread corresponds to higher stability; (c) Temporal intensity profile and residual spectral phase of the compressed XPW pulse measured by SRSI. Figure 3 . Figure 3. Fundamental (measured after the nonlinear interaction) and XPW normalized spectra as a function of laser input energy.Each spectrum is averaged over 500 shots. Figure 4 . Figure 4. Spectro-spatial distribution of the XPW beam, 15 cm after the second crystal (a) and propagated over 2 m (b).Spectral FWHM measured across the beam profile are indicated with white diamonds; (c) Spatial intensity distribution of the XPW beam at the focus of a 500 mm lens. Figure 5 . Figure 5. (a) Simultaneous measurements of energy fluctuations of the pump laser, amplified and filtered pulses over 16 min.Over one hour, typical rms energy deviations of the pump, amplifier and XPW pulses are 0.3%, 1% and 1.6% respectively; (b) Relative CEP drift with feedback control on the oscillator of the amplifier (measured after the XPW setup with no BaF 2 crystal) and XPW pulses.Energy and CEP stability measurements have been done on the same day under similar experimental conditions.
3,872.4
2013-03-18T00:00:00.000
[ "Physics" ]
Emergent Inert Adjoint Scalar Field in SU(2) Yang-Mills Thermodynamics due to Coarse-Grained Topological Fluctuations We compute the phase and the modulus of an energy- and pressure-free, composite, adjoint, and inert field φ in an SU (cid:2) 2 (cid:3) Yang-Mills theory at large temperatures. This field is physically relevant in describing part of the ground-state structure and the quasiparticle masses of excitations. The field φ possesses nontrivial S 1 -winding on the group manifold S 3 . Even at asymptotically high temperatures, where the theory reaches its Stefan-Boltzmann limit, the field φ , though strongly power suppressed, is conceptually relevant: its presence resolves the infrared problem of thermal perturbation theory. Introduction In 1, 2 one of us has put forward an analytical and nonperturbative approach to SU 2 /SU 3 Yang-Mills thermodynamics. Each of these theories comes in three phases: deconfining electric phase , preconfining magnetic phase , and completely confining center phase . This approach assumes the existence of a composite, adjoint Higgs field φ, describing part of the thermal ground state, that is, the BPS saturated topologically nontrivial sector of the theory. The field φ is generated by a spatial average over noninteracting trivial-holonomy SU 2 calorons 3 which can be embedded in SU 3 . The "condensation" 1 of trivial-holonomy SU 2 calorons into the field φ must take place at an asymptotically high temperature 1, 2 , that is, at the limit of applicability of the gauge-field theoretical description. For any physics model formulated in terms of an SU 2 /SU 3 Yang-Mills theory this is to say that caloron "condensation" takes place at T ∼ M P where M P denotes 2 ISRN High Energy Physics the Planck mass. Since |φ| ∼ Λ 3 /2πT topological defects only marginally deform the idealgas expressions for thermodynamical quantities at T Λ. Here Λ denotes the Yang-Mills scale. Every contribution to a thermodynamical quantity, which arises from the topologically nontrivial sector, is power suppressed in temperature. As a consequence, the effective theory is asymptotically free and exhibits, though in a quantitatively different way, the infraredultraviolet decoupling property 1, 2 seen in renormalized perturbation theory 4-7 . Asymptotic freedom is a conceptually very appealing property of SU N Yang-Mills theories. It first was discovered in perturbation theory 8-11 . In the effective thermal theory, interactions between trivial-holonomy calorons in the ground state are taken into account by obtaining a pure-gauge solution to the classical equations of motion for the topologically trivial sector in the nonfluctuating and nonbackreacting background φ. Thus the partition function of the fundamental theory is evaluated in three steps in the deconfining phase: i integrate over the admissible part of the moduli space for the caloron-anticaloron system and spatially average over the associated two-point correlations to derive the classical and temperature dependent dynamics of an adjoint, spatially homogeneous scalar field φ, ii establish the quantum mechanical and statistical inertness of φ and use it as a temperature-dependent background to find a pure-gauge solution a bg μ to the Yang-Mills equations describing the trivial-topology sector. Together, φ and a bg μ constitute the thermal ground state of the system. 
The fact that the action for the ground-state configurations φ and a bg μ is infinite is unproblematic since the corresponding, vanishing weight in the partition function is associated with a nonfluctuating configuration and therefore can be factored out and is cancelled when evaluating expectation values in the effective theory. iii Consider the interactions between the macroscopic ground state and trivial-topology fluctuations in terms of quasiparticle masses of the latter which are generated by the adjoint Higgs mechanism 2 and impose thermodynamical self-consistency to derive an evolution equation for the effective gauge coupling e. In the following, we will restrict our discussion to the case SU 2 . Isolated magnetic charges are generated by dissociating calorons of sufficiently large holonomy 12-23 ; for a quick explanation of the term holonomy, see Figure 1. Nontrivial holonomy is locally excited by interactions between trivial-holonomy calorons and anticalorons mediated by plane-wave configurations. In 18 it was shown as a result of a heroic calculation that small large holonomy leads to an attractive repulsive potential between the monopole and the antimonopole constituents of a given caloron. An attraction between a monopole and an antimonopole leads to annihilation once the distance between their centers is comparable to the sum of their charge radii. Thermodynamically, the probability for exciting a small holonomy is much larger than that for exciting a large holonomy. In the former case, this probability roughly is determined by the one-loop quantum weight of a trivial holonomy caloron, while in the latter case both monopole constituents have a combined mass ∼ 4π 2 T ∼ 39T 14 . Thus an attractive potential between a monopole and its antimonopole is the by far dominating situation. This is the microscopic reason for a negative ground-state pressure P g.s. which, on spatial average, turns out to be P g.s. −4πΛ 3 T the equation of ground state is ρ g.s. −P g.s. 1, 2 . In the unlikely case of repulsion large holonomy the monopole and the antimonopole separate back to back until their interaction is sufficiently screened to facilitate their existence in isolation as long as the overall pressure of the system is positive . Magnetic monopoles in isolation do not contribute to the pressure of the system 3 . The overall pressure is positive if the gauge-field fluctuations after spatial coars graining are sufficiently light and energetic to over compensate the negative ground-state contribution, that is, if the temperature is sufficiently large. Caloron-induced tree-level masses for gauge-field modes decay as 1/ √ T when heating up the system. Due to the linear rise of ρ g.s. with T the thermodynamics of the ground state is thus subdominant at large temperatures 4 . The main purpose of the present work is to compute and to discuss the dynamical generation of an adjoint, macroscopic, and composite scalar field φ. This is a first-principle analysis of the ground-state structure in the electric phase of an SU 2 Yang-Mills theory. The paper is organized as follows. In Section 2 we write down and discuss a nonlocal definition, relevant for the determination of φ's phase, in terms of a spatial and scale-parameter average over an adjointly transforming 2-point function. This average needs to be evaluated on trivial-holonomy caloron and anticaloron configurations at a given time τ. 
In Section 3 we perform the average and discuss the occurrence of a global gauge freedom in φ's phase, which has a geometrical interpretation. In Section 4, we show how the derived information about a nontrivial S 1 winding of the field φ together with analyticity of the right-hand side of the associated BPS equation and with the assumption of the existence of an externally given scale Λ can be used to uniquely construct a potential determining φ's classical and temperature dependent dynamics. In Section 5 we summarize and discuss our results and give an outlook on future research. Definition of φ's Phase In this section we discuss the BPS saturated, topological part of the ground-state physics in the electric phase of an SU 2 Yang-Mills theory. According to the approach in 1, 2 the adjoint scalar φ emerges as an energy-and pressure-free BPS saturated field from a spatial average over the classical correlations in a caloron-anticaloron system of trivial holonomy in absence of interactions. On spatial average, the latter are taken into account by a pure-gauge configuration solving the classical, trivial-topology gauge-field equations in the spatially homogeneous background φ. This is consistent since φ's quantum mechanical and statistical inertness can be established. Without assuming the existence of a Yang-Mills scale Λ only φ's phase, that is φ/|φ| , can be computed. A computation of φ itself requires the existence of Λ. As we will see, the information about the S 1 winding of φ's phase together with the analyticity of the right-hand side of φ's BPS equation uniquely determines φ's modulus in terms of Λ and T . 2.3 The solutions in 2.2 the superscript A C refers to anti caloron are generated by a temporal mirror sum of the pre potential Π of a single anti instanton in singular gauge 24-26 . They have the same color orientation as the "seed" instanton or "seed" anti-instanton. In 2.2 λ a , a 1, 2, 3 , denote the Pauli matrices. The "nonperturbative" definition of the gauge field is used were the gauge coupling constant g is absorbed into the field. The scalar function Π τ, x is given as 3 where r ≡ |x|, β ≡ 1/T , and ρ denotes the scale parameter whose variation leaves the classical action S 8π 2 /g 2 invariant. At a given ρ the solutions in 2.2 can be generalized by shifting the center from z 0 to z τ z , z by the quasi translational invariance of the classical action 6 S. Another set of moduli is associated with global color rotations of the solutions in 2.2 . From the BPS saturation it follows that the Euclidean energy-momentum tensor θ μν , evaluated on A C,A μ , vanishes identically This property translates to the macroscopic field φ with energy-momentum tensor θ μν in an effective theory since φ is obtained by a spatial average over caloron-anticaloron correlations neglecting their interactions 7 The field φ is spatially homogeneous since it emerges from a spatial average. If the action density governing φ's dynamics in the absence of caloron interactions contains a kinetic term quadratic in the τ-derivatives and a potential V then 2.7 is equivalent to φ solving the firstorder equation In 2.8 the right-hand side will turn out to be determined only up to a global gauge rotation, see Figure 2. Already at this point it is important to remark that the Yang-Mills scale parametrizes the potential V and thus also the classical solution to 2.8 . In the absence of trivial-topology fluctuations it is, however, invisible, see 2.7 . 
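For reference, the prepotential referred to in (2.3)-(2.4) above is, in the usual conventions for the Harrington-Shepard caloron, the temporal mirror sum of the single (anti)instanton prepotential. We quote the standard expression from the literature rather than from the display itself, so the normalisation should be checked against [24-26]:

\[
  \Pi(\tau, r) \;=\; 1 + \sum_{n\in\mathbb{Z}} \frac{\rho^2}{(\tau - n\beta)^2 + r^2}
  \;=\; 1 + \frac{\pi\rho^2}{\beta r}\,
  \frac{\sinh(2\pi r/\beta)}{\cosh(2\pi r/\beta) - \cos(2\pi\tau/\beta)},
\]

with r = |x|, β = 1/T and ρ the scale parameter, exactly as in the definitions given after (2.4).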
Only after the macroscopic equation of motion for the trivial-topology sector is solved for a pure-gauge configuration in the background φ does the existence of a Yang-Mills scale become manifest by a nonvanishing ground-state pressure and a nonvanishing ground-state energy density 1, 2 . Hence the trace anomaly θ μμ / 0 for the total energy-momentum tensor θ μν ≡ θ g.s. μν θ fluc μν in the effective theory which includes the effects of trivial-topology fluctuations: since θ g.s. μν 4πTΛ 3 δ μν and θ fluc μν ∝ T 4 for T Λ the trace anomaly dies off as Λ 3 /T 3 . Without imposing constraints other than nonlocality 9 the τ dependence of φ's phase the ratio of the two averages φ and |φ| over admissible moduli deformations, A C,A μ would naively be characterized as 2.9 The dots in 2.9 stand for the contributions of higher n-point functions and for reducible, that is, factorizable, contributions with respect to the spatial integrations. In 2.9 the following definitions apply: 2.10 The integral in the Wilson lines in 2.10 is along a straight line 10 connecting the points τ, 0 and τ, x , and P denotes the path-ordering symbol. Under a microscopic gauge transformation Ω y the following relations hold: 2.11 As a consequence of 2.11 the right-hand side of 2.9 transforms as Thus we have defined an adjointly transforming scalar in 2.9 . Moreover, only the timedependent part of a microscopic gauge transformation survives after spatial averaging macroscopic level . In 2.9 the ∼ sign indicates that both left-and right-hand sides satisfy the same homogeneous evolution equation in τ Here D is a differential operator such that 2.14 represents a homogeneous differential equation. As it will turn out, 2.14 is a linear second-order equation which, up to global gauge rotations, determines the first-order or BPS equation whose solution φ's phase is. Each term in the series in 2.9 is understood as a sum over the two solutions in 2.2 , that is, A α A C α or A α A A α . As we will show in Section 3, the dimensionless quantity defined on the right-hand side of 2.9 is ambiguous 11 ; the operator D, however, is not. The quantities appearing in the numerator and denominator of the left-hand side of 2.9 are understood as functional and spatial averages over the appropriate multilocal operators, being built of the field strength and the gauge field. The functional average is restricted to the moduli spaces of A α A C α and A α A A α excluding global color rotations and time translations. Let us explain this in more detail. For the gauge variant density in 2.9 an average over global color rotations and time shifts τ → τ τ z 0 ≤ τ z ≤ β would yield zero and thus is forbidden 12 . The nonflatness of the measure with respect to the separate ρ integration in the numerator and the denominator average in 2.9 transforms into flatness by taking the ratio. Since the integration weight exp −S is independent of temperature on the moduli space of a caloron, the right-hand side of 2.9 must not exhibit an explicit temperature dependence. This forbids the contribution of n-point functions with n > 2, and we are left with an investigation of the first term in 2.9 . In the absence of a fixed mass scale on the classical level an average overspatial translations would have a dimensionful measure d 3 z making the definition of a dimensionless quantity ∼ φ/|φ| impossible. We conclude that the average overspatial translations is already performed in 2.9 . 
Since one of the two available length scales ρ and β parametrizing the caloron or the anticaloron is integrated over in 2.9 , the only scale responsible for a nontrivial τ dependence of φ a /|φ| is β. What about the contribution of calorons with a topological charge modulus larger than unity? Let us consider the charge-two case. Here we have three moduli of dimension length which should enter the average defining the differential operator D: two scale parameters and a core separation. The reader may easily convince himself that by the absence of an explicit temperature dependence it is impossible to define the associated dimensionless quantity in terms of spatial and moduli averages over n-point functions involving these configurations. The situation is even worse for calorons of topological charge larger than two. We conclude that only calorons of topological charge ±1 contribute to the definition of the operator D in 2.14 by means of 2.9 . Computation of Two-Point Correlation Before we perform the actual calculation let us stress some simplifying properties of the solutions A C,A μ in 2.2 . The path-ordering prescription for the Wilson lines { τ, 0 , τ, x } and { τ, 0 , τ, x } † in 2.10 can actually be omitted. To see this, we first consider the quantity P C,A τ, rst defined as The vector t denotes the unit line tangential along the straight line connecting the points τ, 0 and τ, x ≡ τ, rt . We have Thus the path-ordering symbol can, indeed, be omitted in 3.2 . The field strength F C aμν on the caloron solution in 2.2 is where Π is defined in 2.4 . For the anticaloron one replaces η by η in 3.4 . Using 2.9 , 3.2 , and 3.4 , we obtain the following expression for the contribution φ a /|φ| | C arising from calorons: The dependences on ρ and β are suppressed in the integrands of 3.5 and 3.6 . It is worth mentioning that the integrand in 3.6 is proportional to δ s for r β. A useful set of identities is .8 states that the integrand for φ a /|φ| | A can be obtained by a parity transformation x → −x of the integrand for φ a /|φ| | C . Since the latter changes its sign, see 3.5 ; one naively would conclude that This, however, would only be the case if no ambiguity in evaluating the integral in both cases existed. But such ambiguities do occur! First, the τ dependence of the anticaloron's contribution may be shifted as compared to that of the caloron. Second, the color orientation of caloron and anticaloron contributions may be different. Third, the normalization of the two contributions may be different. To see this, we need to investigate the convergence properties of the radial integration in 3.5 . It is easily checked that all terms give rise to a converging r integration except for the following one: Namely, for r > R β 3.10 goes over in 4 t a πρ 2 sin 2g τ, r βr 3 . 3.11 Thus the r-integral of the term in 3.10 is logarithmically divergent in the infrared 13 : 4t a πρ 2 β ∞ R dr r sin 2g τ, r . 3.12 Recall that g τ, r behaves like a constant in r for r > R. The angular integration, on the other hand, would yield zero if the radial integration was regular. Thus a logarithmic divergence can be cancelled by the vanishing angular integral to yield some finite and real but undetermined normalization of the emerging τ dependence. To investigate this, both angular and radial integration need to regularized. We may regularize the r integral in 3.12 by prescribing 3.14 Away from the pole at 0 this is regular. For < 0 3.14 can be regarded as a legitimate analytical continuation. 
An ambiguity inherent in 3.14 relates to how one circumvents the pole in the smeared expression 3.15 Concerning the regularization of the angular integration we may introduce defect or surplus angles 2η in the azimuthal integration as see Figure 3. In 3.16 α C is a constant angle with 0 ≤ α C ≤ 2π and 0 < η 1. Obviously, this regularization singles out the x 1 x 2 plane. As we will show below, the choice of regularization plane translates into a global gauge choice for the τ dependence of φ's phase and thus is physically irrelevant: the apparent breaking of rotational symmetry by the angular regularization translates into a gauge choice. 3.17 To see what is going on we may fix, for the time being, the ratio η /η for the normalization of the caloron contribution to a finite and positive but otherwise arbitrary constant Ξ when sending η and η to zero in the end of the calculation: 3.19 where A is a dimensionless function of its dimensionless argument. The sign ambiguity in 3.19 arises from the ambiguity associated with the way how one circumvents the pole in 3.15 and whether one introduces a surplus or a defect angle in 3.16 . Furthermore, there is an ambiguity associated with a constant shift τ → τ τ C 0 ≤ τ C ≤ β in 3.19 . For the anticaloron contribution we may, for the time being, fix the ratio η /η to another finite and positive constant Ξ . In analogy to the caloron case, there is the ambiguity related to a shift τ → τ τ A 0 ≤ τ A ≤ β in the anticaloron contribution. Moreover, we may without 12 ISRN High Energy Physics restriction of generality global gauge choice use an axis for the angular regularization which also lies in the x 1 x 2 plane, but with a different angle α A . Then we have where the choices of signs in either contribution are independent. Equation 3.20 is the basis for fixing the operator D in 2.14 . To evaluate the function A 2πτ/β in 3.19 numerically, we introduce the same cutoff for the ρ integration in the caloron and anticaloron case as follows: This introduces an additional dependence of A on ζ. In Figure 4 the τ dependence of A for various values of ζ is depicted. It can be seen that 3.22 Therefore we have 3.23 The numbers ζ 3 Ξ, ζ 3 Ξ , τ C /β, and τ A /β in 3.23 are undetermined. For each color orientation corresponding to a given angular regularization there are two independent parameters, a normalization and a phase shift. The principal impossibility to fix the normalizations reflects the fact that on the classical level the theory is invariant under spatial dilatations. To give a meaning to these number, a mass scale needs to be generated dynamically. This, however, can only happen due to dimensional transmutation, which is known to be an effect induced by trivial-topology fluctuations 8-11 . The result in 3.23 is highly nontrivial since it is obtained only after an integration over the entire admissible part of the moduli spaces of anti calorons is performed. Let us now discuss the physical content of 3.23 . For fixed values of the parameters ζ 3 Ξ, ζ 3 Ξ , τ C /β, and τ A /β the right-hand side of 3.23 resembles a fixed elliptic polarization in the x 1 x 2 plane of adjoint color space. 
For a given polarization plane the two independent numbers normalization and phase-shift of each oscillation axis parametrize the solution space in total four undetermined parameters of a second-order linear differential equation From 3.23 we observe that the operator D is 3.25 Since for a given polarization plane there is a one-to-one map from the solution space of 3.24 to the parameter space associated with the ambiguities in the definition 2.9 we conclude that the operator D is uniquely determined by 2.9 . What we need to assure the validity of 2.7 is a BPS saturation 14 of the solution to 3.24 . Thus we need to find first-order equations whose solutions solve the second-order equation 3.24 . The relevant two first-order equations are where we have defined φ |φ| φ τ . Obviously, the right-hand sides of 3.26 are subject to a global gauge ambiguity associated with the choice of plane for angular regularization, any normalized generator other than λ 3 could have appeared, see Figure 2. Now the solution to either of the two equations 3.26 also solves 3.24 , 3.27 Traceless, hermitian solutions to 3.26 are given as where C and τ 0 denote real integration constants which both are undetermined. Notice that the requirement of BPS saturation has reduced the number of undetermined parameters from four to two: an elliptic polarization in the x 1 x 2 plane is cast into a circular polarization. Thus the field φ winds along an S 1 on the group manifold S 3 of SU 2 . Both winding senses appear but cannot be distinguished physically 1, 2 . How to Obtain φ's Modulus Here we show how the information about φ's phase in 3.28 can be used to infer its modulus. Let us assume that a scale Λ is externally given which characterizes this modulus at a given temperature T . Together, Λ and T determine what the minimal physical volume |φ| −3 is for which the spatial average over the caloron-anticaloron system saturates the infinite-volume average appearing in 2.9 . We have In order to reproduce the phase in 3.28 a linear dependence on φ must appear on the righthand side of the BPS equation 2.8 . Furthermore, this right-hand side ought not depend on β explicitly and must be analytic in φ 15 . The two following possibilities exist: has a finite radius of convergence. According to 3.28 we may write when substituting 4.5 into 4.3 . This is acceptable and indicates that at T Λφ's modulus is small. The right-hand side of 4.3 defines the "square-root" V 1/2 of a potential V |φ| ≡ tr V 1/2 † V 1/2 Λ 6 tr φ −2 , and the equation of motion 4.3 can be derived from the following action: Notice that a shift V → V const is forbidden in 4.8 since the relevant equation of motion is the first-order equation 4.3 . After the spatial average is performed the action S φ is extended by including topologically trivial configurations a μ in a minimal fashion: ∂ τ φ → ∂ μ φ ie φ, a μ ≡ D μ φ and an added kinetic term. Here e denotes the effective gauge coupling. Thus the effective Yang-Mills action S is written as where G μν G a μν λ a /2 and G a μν ∂ μ a a ν − ∂ ν a a μ − e abc a b μ a c ν . In 4.2 and 4.3 the existence of the mass scale Λ the Yang-Mills scale was assumed. One attributes the generation of a mass scale to the topologically trivial sector which, however, was assumed to be switched off so far. How can a contradiction be avoided? The answer to this question is that the scale Λ remains hidden as long as topologically trivial fluctuations are switched off; see 2.7 . 
Only after switching on interactions between trivialholonomy calorons within the ground state can Λ be seen 1, 2 . Let us repeat the derivation of this result: in 1, 2 we have shown that the mass squared of φ-field fluctuations, ∂ 2 |φ| V |φ| , is much larger than the square of the compositeness scale |φ|. Moreover ∂ 2 |φ| V |φ| is much larger than T 2 for all temperatures T ≥ T c,E where T c,E denotes the critical temperature for the electric-magnetic transition. Thus φ is quantum mechanically and statistically inert: it provides a nonbackreacting and undeformable source for the following equation of motion: which follows from the action in 4.9 . A pure-gauge solution to 4.10 , describing the ground state together with φ, is a bg μ π e Tδ μ4 λ 3 . 4.11 As a consequence of 4.11 we have D μ φ ≡ 0, and thus a ground-state pressure P g.s. −4πΛ 3 T and a ground-state energy-density ρ g.s. 4πΛ 3 T are generated in the electric phase: The so far hidden scale Λ becomes visible by averaged-over caloron-anticaloron interactions encoded in the pure-gauge configuration a bg μ . Summary and Outlook Let us summarize our results. We have derived the phase and the modulus of a statistically and quantum mechanically inert adjoint and spatially homogeneous scalar field φ for an SU 2 Yang-Mills theory being in its electric phase. This field and a pure-gauge configuration together suggest the concept of a thermal ground state since they generate temperaturedependent pressure and energy density with an equation of state corresponding to a cosmological constant. The existence of φ originates from the spatial correlations inherent in BPS saturated, trivial-holonomy solutions to the classical Yang-Mills equations at finite temperature: the Harrington-Shepard solutions of topological charge modulus one. To derive φ's phase these field configurations are, in a first step, treated as noninteracting when performing the functional average over the admissible parts of their moduli spaces. We have shown why adjoint scalar fields arising from configurations of higher topological charge do not exist. The BPS saturated and classical field φ possesses nontrivial S 1 winding on the group manifold S 3 . The associated trajectory on S 3 becomes circular and thus a pure phase only after the integration over the entire admissible parts of the moduli spaces is carried out. Together with a pure-gauge configuration the adjoint scalar field φ generates a linear temperature dependence of the ground-state pressure and the ground-state energy density where the pure-gauge configuration solves the Yang-Mills equations in the background φ and, after the spatial average, describes the interactions between trivial-holonomy calorons. The puregauge configuration also makes explicit that the electric phase is deconfining 1, 2 . Since trivial-topology fluctuations may acquire quasiparticle masses on tree level by the adjoint Higgs mechanism 1, 2 , the presence of φ resolves the infrared problem inherent in a perturbative loop expansion of thermodynamical quantities 27 . Since there are kinematical constraints for the maximal hardness of topologically trivial quantum fluctuations, no renormalization procedure for the treatment of ultraviolet divergences is needed in the loop expansion of thermodynamical quantities 27 performed in the effective theory. These kinematical constraints arise from φ's compositeness emerging at distances ∼ |φ| −1 . 
The usual assertion that the effects of the topologically nontrivial sector are extremely suppressed at high temperature (they turn out to be power suppressed in T) is shown to be correct by taking this sector into account. The theory indeed has a Stefan-Boltzmann limit which is approached very quickly. It turns out to be incorrect, however, to neglect the topologically nontrivial sector from the start: assuming T ≫ Λ to justify the omission of this sector before performing a perturbative loop expansion of thermodynamical quantities does not capture the thermodynamics of an SU(2) Yang-Mills theory and leads to the known problems in the infrared sector [28].
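For convenience, the ground-state relations used in Sections 4 and 5 can be collected in one place. The explicit formulas below are a sketch reconstructed from the surrounding text and from [1, 2]; in particular the value of the modulus, |φ| = (Λ³/(2πT))^{1/2}, is taken from those references rather than derived here.

\[
  V(\phi) = \Lambda^6\, \mathrm{tr}\,\phi^{-2}, \qquad
  a^{\mathrm{bg}}_{\mu} = \frac{\pi}{e}\, T\, \delta_{\mu 4}\, \lambda_3, \qquad
  D_\mu \phi \equiv 0,
\]
\[
  P^{\mathrm{g.s.}} = -4\pi \Lambda^3 T, \qquad \rho^{\mathrm{g.s.}} = 4\pi \Lambda^3 T .
\]

As a consistency check, for a circularly polarised φ one has φ² = |φ|² 1₂, so with |φ|² = Λ³/(2πT) the potential evaluates to V(|φ|) = 2Λ^6/|φ|² = 4πΛ³T, reproducing ρ^{g.s.} above.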
6,755
2005-01-01T00:00:00.000
[ "Physics" ]
The Relationship Between Scientific Production and Economic Growth Through R&D Investment: A Bibliometric Approach This quantitative bibliometric research measures the efficiency of investment in R&D for the 17 more relevant countries investing in R&D through a novel indicator based on the number of scientific articles (associated with stock markets), produced for every 1% of investment in R&D in terms of GDP. The study is justified by the need to deepen the relationship between investment in R&D and economic growth, and was conducted for developed and emerging countries separately, so that the understanding of which countries or regions’ investment in R&D and its consequent scientific production has the greatest impact over the size of their economies through innovation. Our findings indicate clearly that R&D investment strongly correlates to the economy’s size of the studied countries. In addition to finding our novel indicator statistically significant with respect to economic growth through a series of multiple linear regressions and proposing economic growth not statically, but as a dynamic cumulative effect over time, this becomes more relevant for emerging countries (represented in this study by China, Brazil, India, Russia and Turkey, or BRIC + Turkey) compared to developed ones, which decants into an opportunity for scholars and particularly governments to design or restructure their R&D policies towards innovation INTRODUCTION Economic growth is crucial and one of the three pillars of sustainable development today. [1]A flourishing economy is usually related to the size of its Gross Domestic Product (GDP), its GDP per capita or how agile its GDP growth is. [2]Nevertheless, it is less common among the literature to associate sustained economic growth to the efforts of research and development (R&D) conducted by each country, particularly related to its scientific production, and even more specifically to articles published.The United Nation's Sustainable Development Goals (SDGs) consider economic growth one of its priorities, precisely on SDG 8 "Decent work and economic growth", which at the same time contains, among its targets, achieving economic growth and productivity through innovation. [3]Herzer [4] studied how R&D spending as a percentage of GDP can influence economic growth, although the relation was established as R&D spending versus Total Factor Productivity (TFP) only in developing countries. [5]he author offered positive correlation between R&D spending and TFP, which could lead to the fact that in emerging countries the impact of R&D over economic growth could be higher.Soete et al. [6] studied OECD countries searching for the same as Herzer, [4] with the same findings regarding TFP.Also, there is a very relevant opportunity to narrow the research articles to those related to specific disciplines backed by evidence that state that their development promotes economic growth, such as stock markets and its close relation to economic development. [7]In spite of this, the literature associated to the relationship between scientific production, R&D spending and economic growth is incipient, particularly when including scientific production in the equation, and almost null if framed in stock markets which consider developed and emerging countries in the same study. GDP remains to date as one of the most common ways to measure economic growth. 
[2]Even though it has some limitations, such as the fact that it is a measurement from today, but does not involve future sustainability or growth, [8] or even being proposed today some other ways of potentially measuring economic growth, such as lighting density in night images of planet Earth that reflect the concentration of lighting in urban areas and associated with economic activity, [9] GDP as traditionally stated permits a standardized and mainly comparable way of tracking economic growth at any country level. [10]In spite of the previous facts, GDP has been involved in bibliometric articles as a cause for different phenomena, or as a classification criterion.For example, Confraria and Godinho [11] used GDP to classify their bibliometric findings among the African countries studied, while De Moya-Anegón and Herrero-Solana [12] considered GDP as a factor for generating more publications, which is the most common scope. [13,14]GDP is also used for normalizing bibliometric results [15] and as an approximation of socio-economic development associated to the countries that publish articles. [16] important edge about the lack of studies that could consider GDP, or particularly GDP growth, not as an independent variable, but rather as a dependent one, is the fact that countries in general search for economic growth, which has been also reflected on the United Nations' SDGs. [3]In other words, countries should be looking for how to accelerate their economic growth.Furthermore, it is stated by the United Nations that innovation must participate towards achieving economic growth, [3] and indeed innovation has been stated to be one of the most important paths to achieve it. [17]According to Greenstone, [18] this can happen by investing in R&D, which coincides with the United Nations including innovation as a way towards economic growth.This would involve a different scope than that offered currently by the literature: R&D investment is needed to achieve innovation, and innovation is needed to reach economic growth. [19]Although R&D is not the only way to reach economic growth, being some other initiatives important, such as financial inclusion, [20] R&D remains highly relevant towards that objective, [21] particularly responsible research and innovation. [22]It is not only a matter of attracting investment towards emerging countries that need to boost their economic growths, [23] but also making sure that those companies that actually invest responsibly and efficiently in R&D get supported by their governments. [24]garding bibliometric studies, although there are studies considering GDP as a variable to measure economic growth [2,10] or as a classification criterion, [11][12][13][14] studies associated with stock markets are scarce.Cicea and Marinescu [25] analyzed the relationship between foreign direct investment and economic growth, but did not include innovation or R&D as variables associated with the previously mentioned relationship.Other bibliometric studies only study the progress of publications made by specific financial journals, [26][27][28][29] so studying economic growth, in the way it has been delimited throughout the introduction, is an opportunity, along with delimiting it to stock markets.for which there is also a lack of literature. 
Given the fact that there is a serious lack of literature associated with understanding how R&D, in terms of the number of articles published, achieves economic growth, the main objective of this research is to determine the impact of R&D spending in terms of scientific production related to stock markets, specifically the number of articles published per studied country, and its correlation with economic growth, not only for emerging countries but also for developed ones. The current bibliometric study becomes highly relevant for six reasons. [14] Second, it encompasses the phenomenon not only for emerging countries, [4] but also for developed ones. [6] Third, the research fits within the United Nations' SDGs. [3] Fourth, it focuses on the number of articles published related to stock markets, a topic that affects economic growth. [7] Fifth, the bibliometric variable associated with the production of scientific articles turned out to be statistically relevant, as will be shown in the discussion section. Sixth, bibliometric studies are highly relevant for addressing the understanding of this type of problem. [30] METHODOLOGY The data search was conducted on February 7th, 2023, through the Web of Science Basic Search, targeting only stock market articles that could relate to or promote economic growth. Narrowing the search to this kind of article is important because the research does not intend to establish a correlation between the publication of any kind of article and economic growth, but only of those articles resulting from the R&D effort that could more probably lead to economic growth. [7] In order to capture all the relevant articles associated with stock markets, a standardized query was identified in the relevant bibliometric literature. [31] Four criteria were prioritized for the search. First, following Khan et al.'s query configuration, [31] the keywords searched to target stock market articles were as per the query below: "bank*" or "finance" or "financial market*" or "financial*" or "stock market*" or "stock indices" or "stock index" or "corporate finance" or "stability" or "liquidity" or "credit" or "asset pricing" or "bitcoin*" or "cryptocurrency*" or "risk*" or "governance" or "sukuk" or "takaful" or "islamic finance" or "islamic bank".
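For reproducibility, the keyword list above can be assembled into a single boolean search string programmatically. The short sketch below is illustrative only and is not the authors' code; it simply joins the terms with OR, and the TS= topic-field wrapper is an assumption about how such a query would be phrased in Web of Science advanced-search syntax (the authors used the Basic Search interface).

```python
# Illustrative only: assembling the topic-search string from the keywords above.
keywords = [
    '"bank*"', '"finance"', '"financial market*"', '"financial*"', '"stock market*"',
    '"stock indices"', '"stock index"', '"corporate finance"', '"stability"',
    '"liquidity"', '"credit"', '"asset pricing"', '"bitcoin*"', '"cryptocurrency*"',
    '"risk*"', '"governance"', '"sukuk"', '"takaful"', '"islamic finance"', '"islamic bank"',
]

query = " OR ".join(keywords)      # redundant terms are harmless under the OR connector
print(f"TS=({query})")             # TS= denotes a topic search in WoS advanced syntax
```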
Although the previous search query has redundancies among its different terms, its configuration was respected, since the redundancy of terms, due to the "or" connector, should not influence the results considering that the database used was the same (Web of Science), [32] in addition to representing a good starting point as a search criterion documented in the literature. Second, based on a previous test search, results associated with the Web of Science Categories Business, Business Finance, Economics and Management go as far back as 1985. In order to study the evolution of publications over time, the time horizon configured for the search was 1985 to 2022, with 2022 being the last full year, in order to preserve comparability among the full years analyzed. Third, since research articles are the main object of study, the document type for the search was configured as "Article". Therefore, not only was a dummy variable considered in Model 1, but two additional multiple linear regressions were conducted, one exclusively for developed countries and the other exclusively for emerging countries. This second multiple linear regression is shown below as Model 2, a regression that was run twice, once for emerging countries and once for developed ones. It is expected that for the three regressions the adjusted R² will be relevant and that the coefficients will be statistically significant, although it is also expected that the impact of the investment in R&D will be higher for emerging countries than for developed ones. [4,6] Among the listed countries, there are 14 developed countries and 6 emerging countries. Nevertheless, since the variable Economy size n-1 requires data corresponding to continuous and uninterrupted years (in order to estimate the next year's economy size), three countries had to be discarded from the study due to intermittent availability of information: Australia, Iran and Switzerland. This adjustment left the 17 remaining countries classified as 12 developed countries and 5 emerging countries. The 17 remaining countries represent 77.60% of the world's published articles in terms of clusters and 80.70% of the world's published articles in terms of authorships. [34] RESULTS According to the delimitations described above, there were 524 entries remaining. Each entry represents one country in one year, with an economy size that corresponds to that year (dependent variable), and a specific number of articles published related to stock markets divided by the corresponding R&D spending (as a percentage of GDP) for that year (independent variable). For each entry, the dummy variable was defined as 1 for emerging countries and 0 for developed ones. Although the total amount of articles is an input used to calculate one of the independent variables and not an independent variable in itself, it is also shown in Table 1 for reference.
Table 2 shows the results for the multiple linear regression that considered the 524 entries listed in Table 1. This regression aims to capture the behavior of the variables over the entire time horizon from 1985 to 2021, whether the countries analyzed are developed or emerging. Independent variable 1 corresponds to Articles published n / R&D spending (% of GDP) n, while Independent variable 2 refers to the dummy variable, where 1 denotes an emerging country and 0 a developed country. The adjusted R² for this regression is 0.2126. Table 3 shows the results for the multiple linear regression run only for developed countries (which correspond to those countries listed in Table 1 except for China, Turkey, Brazil, India and Russia), while Table 4 shows the results for the regression that corresponds to emerging countries only (China, Turkey, Brazil, India and Russia, or BRIC + Turkey). Fourth, the main Web of Science Categories associated in disciplinary terms with economic growth are Business, Business Finance, Economics and Management. [33] Therefore, the search was narrowed based on these specific categories. Regarding the countries studied, the research was delimited to the main twenty countries associated with scientific production in terms of number of published articles, according to Abramo et al. [34] Economy size n-1 represents the size of the economy of the country studied during the previous year, considering that for the most distant year (or initial year of study) the size of the economy is assumed to be 1; subsequently it is the percentage resulting from the compound growth of the economy through the gradual accumulation of GDP growth rates. GDP growth rate n represents the percentage rate at which each economy grows or shrinks. The GDP growth rates for each year studied were extracted from the World Bank database. [35] Articles published n corresponds to the number of articles published per year, and R&D spending (% of GDP) n reflects the effort of each country investing in R&D as a percentage of its GDP, for each year studied. The investments in R&D as a percentage of GDP were extracted from the OECD database. [36] Country type is the dummy variable that corresponds to 1 if the country studied is an emerging country, or 0 if the country studied is a developed one. In order to maintain comparability, since some countries did not have registered data related to GDP growth and/or R&D expense for every year, the study goes up to the year 2021.
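As a concrete illustration of the variables just defined, the sketch below shows one plausible way to compute the cumulative economy size, the articles-per-1%-of-R&D indicator, and the Model 1 regression with a country-type dummy. It is not the authors' code: the column names, the input file layout, and the use of statsmodels are assumptions made only for the example.

```python
import pandas as pd
import statsmodels.api as sm

# Assumed columns per country-year row:
#   gdp_growth  - annual GDP growth rate as a decimal (World Bank)
#   articles    - stock-market articles published that year (Web of Science)
#   rd_gdp_pct  - R&D spending as a percentage of GDP (OECD)
#   emerging    - 1 for emerging countries, 0 for developed ones
df = pd.read_csv("panel_1985_2021.csv").sort_values(["country", "year"])

def cumulative_size(growth):
    """Economy size: 1 in the initial year, then compounded GDP growth rates."""
    size = (1 + growth).cumprod()
    return size / size.iloc[0]

df["economy_size"] = df.groupby("country")["gdp_growth"].transform(cumulative_size)

# Novel efficiency indicator: articles published per 1% of GDP invested in R&D.
df["articles_per_rd"] = df["articles"] / df["rd_gdp_pct"]

# Model 1: economy size regressed on the indicator and the emerging-country dummy.
X = sm.add_constant(df[["articles_per_rd", "emerging"]])
model = sm.OLS(df["economy_size"], X, missing="drop").fit()
print(model.summary())  # adjusted R^2 and coefficient significance
```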
Based on the fact that there are relevant differences in terms of scientific production and of budgets destined specifically for research and development in developed versus emerging countries, [37] the second research question is the following: to what extent does the number of articles published per 1% of investment in R&D in terms of GDP correlate with the size of the economy of developed countries compared to emerging countries? The total entries analyzed per regression were 389 and 135 respectively (totaling 524). Independent variable 1 is Articles published n / R&D spending (% of GDP) n. For these particular cases, the dummy variable was no longer needed, since developed (Table 3) and emerging (Table 4) countries were treated separately. The adjusted R² for each regression is 0.1006 and 0.6164 respectively. Regarding the three regressions, all of the independent variables plus the constants were statistically significant at a 99% confidence level. The two research questions stated were the following: Question 1: to what extent does the number of articles published per 1% of investment in R&D in terms of GDP correlate with the size of the economy of each country? Question 2: to what extent does the number of articles published per 1% of investment in R&D in terms of GDP correlate with the size of the economy of developed countries compared to emerging countries? It was expected that, for all cases, the adjusted R²s would be statistically relevant. According to Hair et al., [38] this can be assumed starting from an adjusted R² of 0.60. Therefore, only the regression shown in Table 4 would have a statistically significant adjusted R². Nevertheless, it is also important to highlight that, in all cases, the coefficients were statistically significant at a 99% confidence level. Considering the previous facts, for the first regression (524 entries) and the first research question, the coefficient of the number of articles published per 1% of investment in R&D in terms of GDP shows a statistically relevant correlation with the size of the economy of each country, although the strength of the model is not as high as expected, with an adjusted R² of 0.2126. It is also relevant to point out that, since the coefficients were statistically relevant, the model can be considered highly dispersed, although valid in terms of the relationship between the variables in a highly dispersed environment. That is, placed in the context that the model intends to explain, the range of possibilities in terms of the size of the economy (dependent variable) versus the productivity of investment in R&D expressed per 1% of GDP invested (independent variable 1) and the fact that the country is a developed or emerging one (independent variable 2) is wide, but there is still a relationship between the variables, with coefficients that aim to explain said relationship. However, since the correlated dependent variable corresponds to the size of the economy expressed as the accumulation of GDP growth rates, the relationship no longer revolves only around annualized growth, but rather around the cumulative effect over the years, reflected in the whole size of the economy. The fact that this is relevant in terms of the coefficients confirms that the productivity of investment in R&D remains relevant, and for both developed and emerging countries. This can also be evidenced through China, whose scientific production has grown rapidly, [41] as well as its economy. [35]
However, the consistency of the regression in terms of the adjusted R² for emerging countries suggests that the effect on economic growth in these countries is much more relevant, and therefore it would be reasonable to think that emerging countries will benefit marginally more from investment in R&D than developed countries, considering the size of their economies nowadays. Investment in R&D remains relevant today, not only because it leads to the innovation that the United Nations [3] associates with economic growth (SDG 8), but also because there is now specific evidence that investment in R&D, and particularly its productivity measured in terms of the generation and publication of scientific articles, correlates with the size of the economy and its progressive future growth. As the main finding of the research, the scientific articles generated and published per each 1% of investment in R&D in terms of GDP offer a correlation statistically strong enough to be used as an indicator of the efficiency of R&D investment towards economic growth. This is particularly important for emerging countries, where there is a greater opportunity to see the benefits of R&D investment in terms of the size of their economies. Considering that the coefficient of the independent variable for the regression of developed countries is much lower than the coefficient of the independent variable for the regression of emerging countries (0.0005 versus 0.0093 respectively), the effect on economic growth obtained from greater investment in R&D and greater scientific production is larger for emerging countries. This makes it more attractive for emerging countries to invest in research and development, due to the greater impact that said investment would have on the growth of their economies, which suggests that the governments of these countries should undertake policies aimed at supporting R&D through the attraction of companies, or by supporting companies that are efficient in terms of R&D. [23,24] Regarding the second question, the interpretation of the results is almost the same as for the first question in the case of the multiple linear regression of developed countries (389 entries). Nevertheless, in the case of the multiple linear regression of emerging countries (135 entries), not only are the coefficients statistically significant, but the adjusted R² can be considered statistically valid as well. Therefore, emerging countries not only show a stronger correlation between the articles published per 1% of investment in R&D in terms of GDP and the size of the economy than that shown by developed countries, but the strength of the model is also high, as previously expected. DISCUSSION Although GDP as a measurement of economic growth has limitations, [8] it still remains the most common and appropriate way to measure it.
[2,9] The methodology of this research considered GDP as part of the regression models in two specific places: on one hand as the dependent variable, specifically in terms of accumulated economic growth; and on the other hand as part of the main independent variable, constituted by the number of articles published divided by the percentage of GDP invested in R&D. Beyond the results obtained in the regressions, considering cumulative economic growth as an approximation of the size of an economy is by itself an innovative perspective compared to the existing literature. In addition, the main independent variable, posed in the way it is, invites us to interpret it as a novel indicator of the efficiency of the R&D effort. The investment made in R&D should yield results, and these should be reflected in the scientific production generated, measured as the number of publications, since epistemologically speaking knowledge would not exist if it were not published. [39] Therefore, Articles published n / R&D spending (% of GDP) n represents an important addition to the understanding of the efficiency of investment in R&D. Herzer [4] anticipated that R&D spending had a positive effect on TFP, but with R&D expressed as a percentage of GDP, as it usually is in the current literature. [5] Through the regressions run as part of this research, there is now complementary evidence that not only does higher investment in R&D promote higher economic growth, but a higher number of scientific articles produced relative to R&D spending as a percentage of GDP also leads to higher economic growth. Specifically, these are scientific articles about stock markets, a discipline whose results, if implemented, promote economic growth. [40] CONCLUSION AND RECOMMENDATIONS According to the results and the discussion, it can be concluded that R&D is essential for the economic growth of countries. This correlation turns out to be particularly important so that emerging countries, through an increase in scientific production as a result of investment in R&D, can achieve higher levels of economic development. Scientific production, measured through the number of articles published for every 1% of investment in R&D, turned out to be an indicator strongly correlated with accumulated economic growth, so it makes sense that emerging countries, which would benefit the most from this relationship, dedicate greater efforts to R&D, with the objective of accelerating the growth of their economies. It is highly recommended that the scientific community delve into the use of this new indicator of R&D investment efficiency to measure the productivity of said investment, within the framework of responsible research and innovation. [22] It is also recommended that governments and the corresponding institutions in emerging countries begin to measure their productivity in the generation of scientific articles in order to appropriately allocate their investment in R&D.
Economy size n = (Economy size n-1 ) × (1 + GDP growth rate n )
Independent variable 1 = Articles published n / R&D spending (% of GDP) n
These countries are United States, China, United Kingdom, Japan, Germany, Italy, France, Spain, Brazil, Canada, India, Australia, Russia, Netherlands, South Korea, Turkey, Poland, Iran, Sweden and Switzerland. According to the authors, 82.20% of the world's published articles in terms of clusters and 85.40% of the world's authorships are associated with those countries as the main country of origin. The main research question is the following: to what extent does the number of articles published per 1% of investment in R&D in terms of GDP correlate with the size of the economy of each country? The multiple linear regression named Model 1 shows the variables involved. Table 2: Results of the first multiple linear regression (524 entries). *Statistically significant at a 99% confidence level. Table 3: Results of the second multiple linear regression (developed countries, 389 entries). *Statistically significant at a 99% confidence level. Table 4: Results of the third multiple linear regression (emerging countries, 135 entries). *Statistically significant at a 99% confidence level.
5,348
2023-11-30T00:00:00.000
[ "Economics" ]
Deep Residual Network for Identifying Bearing Fault Location and Fault Severity Concurrently Fault diagnosis is composed of two tasks, i.e., fault location detection and fault severity identification, which are both significant to equipment maintenance. The former can indicate where the defective component lies in, and the latter provides evidence on the residual life of the component. However, traditional fault diagnosis methods, like the time-based methods, frequency-based methods and time-frequency-based methods, can only achieve one goal every time. They are not able to produce highly representative features for dealing with above-mentioned two tasks simultaneously. In addition, there is a huge increase in the amount of monitoring data of equipment. There is urgent need for handling this massive data, obtaining highly discriminative features, and further producing accurate diagnosis results in the field of fault diagnosis. Aimed at these problems, a deep residual network based on multi-task learning is proposed, taking detection of fault location and judgment of fault severity into account simultaneously. This network is fed with two kinds of diagnostic information, which is helpful to mine the potential links between two tasks of fault diagnosis and generate very representative features. Moreover, based on maximizing activation value, a visualization method of role of deep neural network is proposed. It can break in the traditional way of using deep neural network as black box. A real bearing experiment validates that the proposed method is reliable and effective in bearing fault diagnosis. I. INTRODUCTION Bearing is widely used in many rotating machines and plays a significant role in them. According to statistics, there are about forty-five percent of the machinery faults that are generated by bearings [1]. In order to ensure the normal operation of the equipment, there are many sensors installed on them for monitoring their operation status, which results in a surge of the amount of data. The data brings a lot of obstacles and challenge, but also excellent research material. Many researchers have been taking much consideration on the issue of handling massive status monitoring data. They mainly focus on the methods of the production of highly The associate editor coordinating the review of this manuscript and approving it for publication was Wei Wei. representative features that can be applied to accomplish fault diagnosis tasks with high accuracy [2], [3]. Due to vibration data of bearing containing rich information of their operation status, the vibration-based feature extraction methods have been being attracted much attention [4]- [6]. They could be classified into two categories. One is traditional and classical feature extraction methods based on signal processing, another is modern intelligent feature extraction methods. They all made great achievements in the past decades. The former is further divided into three sub-classes. The first is time-domain statistics features, such as root mean square (RMS) [7], kurtosis [8], and so on. The second is frequency-domain characteristics. The Fourier frequency transform (FFT) [9] can be seen as representative. The third is time-frequency-domain features. For example, short time Fourier transform (STFT) [10], Wigner-Ville distribution (WVD) [11], empirical mode decomposition (EMD) [12] and wavelet transform [13], [14] are all classical feature extraction methods in this kind of category. 
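To make the classical features mentioned above concrete, the following sketch computes an RMS value, kurtosis, and a simple FFT-based frequency feature for one vibration segment. It is an illustrative example only; the function name and the particular feature set are ours, not a method defined in this paper.

```python
import numpy as np
from scipy.stats import kurtosis

def classical_features(x, fs=12_000):
    """Classical time- and frequency-domain features for one vibration segment."""
    rms = np.sqrt(np.mean(x ** 2))                 # time domain: root mean square
    kurt = kurtosis(x, fisher=False)               # time domain: kurtosis
    spectrum = np.abs(np.fft.rfft(x)) / len(x)     # frequency domain: FFT magnitude
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    dominant = freqs[np.argmax(spectrum[1:]) + 1]  # strongest non-DC component
    return {"rms": rms, "kurtosis": kurt, "dominant_freq_hz": dominant}

# Example with a 1024-point segment sampled at 12 kHz (the rate used later in the paper)
segment = np.random.randn(1024)
print(classical_features(segment))
```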
However, the above-mentioned methods are aimed at single component, single type of fault, and single short time series data. They are not good at dealing with massive data and multiple components. Then the modern intelligent feature extraction methods have been being developed rapidly. The artificial neural network (ANN) [15], [16] and sparse coding (SC) [17], [18] are the typical representatives. Sparse coding can be seen as one kind of ANN with shallow structures. In essence, they are all data-driven methods. The fault diagnosis issue is converted to pattern recognition problem by them. Though these methods did work in intelligent fault diagnosis of rotating machinery with massive data, they still have two shortcomings: the first is that the effectiveness of these methods is heavily dependent on input features that are designed by professional researchers or experts manually. The second is that the capacity of them is limited in handling complex non-linear problems of fault diagnosis since they just have shallow structures. Being against with these deficiencies, a promising feature extraction method named deep learning (DL) [19] has been being developed in recent years. Deep learning is one of the frontiers and research hotspots in machine learning or artificial intelligence, which is represented as deep neural network (DNN). It enhances the ability of traditional ANN by adding many non-linear mapping layers, since the phenomenon of gradient vanishing is restrained greatly. Now, the DNN has made great success in many domains, such as image processing [20], [21], audio classification [22], [23] and other domains [24]. It also includes fault diagnosis [25]- [29]. From these case studies of fault diagnosis, it can be known that the DNN used in them have two deficiencies. The one is that these constructed DNNs can only deal with single issue of fault diagnosis which contains two aspects, i.e., fault location detection and fault severity identification. Another is that these DNNs are used as ''black box'' in most of the research cases. In other words, the features extracted by these DNNs cannot be clearly understood. It brings many obstacles to subsequent researchers. They cannot clearly understand why the features extracted by these DNNs perform well, and how to improve these models later. Aimed at these problems, a deep residual network (DRN) based on multi-task learning and generic visualization method of DNN's role is proposed. The former can be applied to handle the above-mentioned two tasks of fault diagnosis synchronously. It has two output layers that are against with different task. Those tasks share partial weights parameters, which is helpful to reduce complexity of networks and mine the relationship between the two tasks. In other words, this network is easier to learn more intrinsic features that can characterize the essential expression of bearings fault, since it is fed with much more knowledge, e.g., information of fault location and fault severity. Meanwhile, in order to help researchers understand why the proposed deep residual network works, this paper puts up a visualization method of network role. The core of this method is that the activation value reflects the response of target neuron to the input. This method can find sensitive input pattern of any neuron in input feature space where human could understand intuitively, based on the maximization of activation value. 
The main contributions of this paper are summarized as follows: (1) A deep residual network based on multi-task learning is proposed to handle bearing fault location and fault severity judgment simultaneously. It can produce highly discriminant feature representation and is helpful to mine the potential links between these two tasks of fault diagnosis. (2) A mathematical model of visualizing deep neural network is constructed on the basis of maximizing activation value, breaking in the traditional way of using deep neural network as ''black box''. (3) A real bearing experiment verifies the effectiveness and reliability of the proposed methods. The experimental results also reveals that the frequency and energy information of resonance zone of vibration signal is significant to bearing fault diagnosis. The rest of paper is organized as follows: Section 2 introduces the construction of the proposed deep residual network on the basis of multi-task learning. Section 3 deduces generic visualization method of the DNN. Section 4 confirms the effectiveness of the proposed network model and visualization method by a real bearing experiment. The conclusion is drawn in Section 5. A. DEEP RESIDUAL NETWORK FOR FEATURE EXTRACTION Inspired by the works [30], [31], a novel deep residual network that is aimed at one-dimensional time domain vibration signal is constructed. After all, bearing vibration signals contain much operation status information, and the occurrence of bearing fault could be reflected in it. Many vibration-based methods have verified this point of view [32]- [34]. Therefore the raw time-domain vibration signal is used as network input. Fig. 1 shows the overall structure of the proposed deep residual network. This network is designed by obeying general CNN construction principle, i.e., (1) the backbone is made up of alternately stacking convolution layers and pooling layers; (2) and the tail layer is composed of fully-connected (FC) layers that are applied to project the extracted feature representation into class label information. It is worth mentioning that the network input is just raw vibration signal, without any signal preprocessing or handcrafted features. The consecutive 27 convolution layers are employed to do hierarchical feature learning, followed by one global average pooling layer (GAP) and two fully-connected layers. One FC layer corresponds to bearing fault location; the other is against bearing fault severity. The reason why the convolution is used as sub-module is that the input vibration signal is considered to be local stationary. In other words, the extracted local features that are calculated in one shorttime window of original signal are also suitable in other window regions, for one convolution kernel. The head layer of the proposed network is designed specially since pre-activation block employed [31]. The middle part contains 12 residual blocks, and each block includes two batch normalization (BN) layers and one dropout layer. The residual structure makes gradient information easily propagated to more front layer, and further makes network deeper and easier to be trained [30]. Batch normalization method can also weaken the problems that deep neural network is hard to be trained and the training of network converges slowly. It speeds up network training through achieving a stable distribution of activation values and allowing higher learning rate, since the internal covariate shift that is caused by the change of network parameters is reduced [35]. 
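As a rough illustration of how such a pre-activation residual block with batch normalization and dropout can be written, the sketch below uses PyTorch (the framework adopted later in the paper). The kernel size of 16 follows the description in this paper, while the channel width, the dropout probability, and the omission of down-sampling are simplifying assumptions; this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class PreActResidualBlock1d(nn.Module):
    """Pre-activation residual block: BN -> ReLU -> Conv1d, twice, with dropout."""
    def __init__(self, channels=64, kernel_size=16, dropout_p=0.5):
        super().__init__()
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, padding="same")
        self.bn2 = nn.BatchNorm1d(channels)
        self.drop = nn.Dropout(dropout_p)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, padding="same")

    def forward(self, x):
        out = self.conv1(torch.relu(self.bn1(x)))
        out = self.conv2(self.drop(torch.relu(self.bn2(out))))
        return out + x  # identity shortcut lets gradients reach earlier layers directly

# Example: one block applied to a batch of raw 1024-point vibration signals
block = PreActResidualBlock1d()
y = block(torch.randn(8, 64, 1024))   # shape preserved: (8, 64, 1024)
```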
The core of this method is stated as follows: the symbol χ denotes one mini-batch of data in one training step, i.e., χ = {x 1 , x 2 , x 3 , . . . , x m }. The output of the batch normalization layer is computed by
y i = γ x̂ i + β (1)
where γ and β are learned parameters; x̂ i is calculated through
μ χ = (1/m) Σ i x i (2)
σ χ ² = (1/m) Σ i (x i − μ χ )² (3)
x̂ i = (x i − μ χ ) / √(σ χ ² + ε) (4)
where μ χ and σ χ ² are the mean and variance of the mini-batch and ε is a small constant added for numerical stability. From (1) to (4), it can be found that batch normalization can also work as a form of data augmentation, which is able to prevent over-fitting to a certain degree. In the training process of the network, a training sample is randomly combined with other samples in one mini-batch. The trained network no longer produces deterministic values for a given training sample, since one training sample forms different mini-batches with different other samples and each mini-batch has a direct effect on the learned parameters γ and β. This is equivalent to introducing disturbances into the sample data, which means the training samples are augmented. A dropout layer is also introduced into the proposed deep residual network, so as to explicitly avoid the over-fitting problem. The core idea is to randomly deactivate some neurons with a certain probability p. This means that part of the connections between two adjacent layers are cut off. This technique can be considered as sampling a ''small'' network from the original big network, and only this ''small'' network is optimized during the training stage. In the testing and inference stage, dropout is no longer used. Then, the whole network can be seen as the average result of many deep residual networks that share the same network structure, which is very similar to the ensemble learning idea. This technique is able to largely improve the generalization ability of a DNN, outperforming many other regularization methods in many research cases [36]. The detailed procedure of the dropout method is as follows: the symbol a (l) denotes the vector of activation values in layer l. Then, in the training period, the input vector i (l+1) is calculated through
r j (l) ∼ Bernoulli(p) (5)
i (l+1) = W (l+1) (r (l) ⊙ a (l) ) + b (l+1) (6)
where ⊙ denotes the element-wise product, r (l) is the vector of independent Bernoulli random variables, and W (l+1) and b (l+1) are the weight matrix and bias of layer l + 1. B. MULTI-TASK LEARNING Though we could optimize a corresponding model for each diagnosis task and obtain an acceptable result, some information that comes from the related task and helps to improve the generalization ability of the model may be neglected. By sharing part or all of the feature representation, the performance of the fault diagnosis model can be generalized well on the homologous task. There is a certain connection between the two tasks of fault diagnosis: the detection of the fault location is related to the fault characteristic frequency of the resonance demodulation spectrum, while the evaluation of the fault severity is connected with the energy of the resonance zone of the vibration signal. The deep residual network based on multi-task learning is therefore built. Multi-task learning enhances model generalization ability by utilizing the domain-specific information contained in the training signals of related tasks. It can be seen as a form of inductive transfer that introduces an inductive bias into a model to improve its performance and makes the model prefer some hypotheses over others. In this research, hard parameter sharing, one of the multi-task learning approaches for deep neural networks, is applied. As shown in Fig. 1, the proposed network shares the hidden convolution layers between the two tasks and keeps one task-specific FC output layer for each task. This network is aimed at dealing with these two different but associated fault diagnosis tasks simultaneously.
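A minimal sketch of hard parameter sharing as just described is given below: a shared convolutional backbone followed by global average pooling and two task-specific fully-connected heads, one for fault location and one for fault severity. The feature dimension and the placeholder backbone are assumptions made for illustration (in the paper, the backbone would be the stack of residual blocks of Fig. 1), so this is not the exact network reported by the authors.

```python
import torch
import torch.nn as nn

class MultiTaskHeads(nn.Module):
    """Shared backbone (hard parameter sharing) with two task-specific output heads."""
    def __init__(self, backbone, feat_dim=64, n_locations=4, n_severities=4):
        super().__init__()
        self.backbone = backbone                  # hidden layers shared by both tasks
        self.gap = nn.AdaptiveAvgPool1d(1)        # global average pooling
        self.head_location = nn.Linear(feat_dim, n_locations)   # fault location task
        self.head_severity = nn.Linear(feat_dim, n_severities)  # fault severity task

    def forward(self, x):                         # x: (batch, 1, signal_length)
        feat = self.gap(self.backbone(x)).flatten(1)
        return self.head_location(feat), self.head_severity(feat)

# Placeholder backbone for the example only (two plain convolutions, width 64)
backbone = nn.Sequential(
    nn.Conv1d(1, 64, 16, padding="same"), nn.ReLU(),
    nn.Conv1d(64, 64, 16, padding="same"), nn.ReLU(),
)
model = MultiTaskHeads(backbone, feat_dim=64)
loc_logits, sev_logits = model(torch.randn(8, 1, 1024))   # two (8, 4) outputs
```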
With regard to the detection of fault location, the cost function is expressed as follows:
J L (θ L ) = −(1/m) Σ i=1..m Σ j=1..k L 1{y (i) = j} log P(y (i) = j|x (i) ) + λ L R(θ L ) (7)
where the subscript L represents location; θ L denotes the weight parameters between the feature-shared layers and the corresponding output layer of the fault location task; m and k L are the number of samples and the number of fault location types respectively; 1{x} denotes the indicative function; P(y (i) = j|x (i) ) is the probability that the i-th sample belongs to class j conditioned on the feature x (i) ; R(θ L ) is the regularization term of the parameters θ L , and λ L is its weight factor. As for the task of evaluating fault severity, it is converted into a classification problem here. The fault severity of the bearing is divided into multiple levels. Then, the cost function of this task is denoted as follows:
J S (θ S ) = −(1/m) Σ i=1..m Σ j=1..k S 1{y (i) = j} log P(y (i) = j|x (i) ) + λ S R(θ S ) (8)
where the subscript S represents severity and k S is the number of severity levels. This loss function is similar to (7). Then, the overall optimization objective of the proposed whole network model is computed as follows:
J = J L + α J S (9)
The symbol α is used to weigh the importance of these two tasks. From (9), it is clear that this network is aimed at optimizing the two tasks simultaneously. This network is fed with more knowledge, which makes it learn more intrinsic and representative features. It is worth pointing out that hard parameter sharing largely weakens the risk of over-fitting. It has been demonstrated that the risk of over-fitting the shared parameters is an order of N smaller than that of over-fitting the task-specific parameters [37]. The symbol N denotes the number of tasks. Intuitively, the more tasks the model learns concurrently, the more general the representation the model needs to learn among all tasks, and the smaller the chance of over-fitting the original task. The proposed network is helpful for mining the potential links between the two tasks of bearing fault diagnosis. III. VISUALIZATION OF DEEP NEURAL NETWORK In most research cases, a DNN is used as a ''black box''. The features in hidden layers cannot be understood intuitively, and the reason why they perform so well is not clear. They are produced via multiple non-linear mapping layers from the input layer to the current layer, which makes it hard for humans to comprehend the abstract and high-dimensional features in higher layers. Considering this problem, a visualization method of the role of a deep neural network is proposed on the basis of activation maximization. That is, the approximately optimal pattern that makes a given unit or neuron have the largest activation value is calculated in the input feature space. A. MATHEMATICAL MODEL In this research, the visualization of what a neuron in the FC layers captures is taken as an example, which can help to mine the link between the two tasks of fault diagnosis. This visualization method also applies to any hidden unit in a DNN. With regard to one hidden unit h i (j) in the j-th layer, the activation value a i (j) represents its response status to a specific input. Since the activation function used in the proposed deep residual network is the rectified linear unit, the larger the value is, the more sensitive the hidden unit is to this input stimulus. Then, once the neural network has been trained, the feature pattern extracted by this neuron can be obtained by solving for the input pattern that makes its activation value largest. In other words, we can solve the following optimization problem to find the most sensitive input pattern of this neuron in the input feature space:
x * = argmax x a i (j) (x), subject to ‖x‖² ≤ E (10)
where x * indicates the optimal solution in the input feature space.
The symbol E represents the maximum energy of all vibration signals in the training set, which limits the search space of the solution. On the basis of the Lagrange multiplier method, the above-mentioned optimization problem can be converted into the extreme value problem of the following Lagrange function:
R(x, λ) = a i (j) (x) − λ(‖x‖² − E) (11)
where a i (j) (x) represents the activation value with regard to the input vibration signal x, and the symbol λ denotes the Lagrange coefficient. The Kuhn-Tucker (KT) requirements of (11) are ignored, but this transform indeed works well in practice. B. OPTIMIZATION METHOD From (11), it is obvious that the function R(x, λ) is highly non-convex. Therefore it is almost impossible to obtain the exact optimal solution. As such, the numerical optimization algorithm named gradient descent (GD) is employed to search for an approximately optimal solution. Its process often includes three steps, i.e., initializing the solution randomly, calculating the gradient with respect to the parameters, and updating the parameters. Following the generic steps of GD, we found that there is a big difference among the approximately optimal solutions when each of them is initialized randomly. The consistency of the solutions is quite poor, which leaves researchers puzzled about what the target unit captures. This phenomenon is caused by the non-convexity of the function R(x, λ): the solution is easily trapped in a local minimum. Aimed at this problem, a natural initialization method for the optimal solution is proposed. That is, any sample in the training set can be seen as a good initial value. This initialization method is based on the following proposed hypotheses: (1) the features in any hidden layer of the deep neural network are trained and learned from the training set; (2) from the view of the input feature space, the features extracted in a hidden layer can be considered as the whole or partial feature expression of the training samples. The above assumptions limit the search range of the optimal solution. The result of the subsequent experiment supports the aforementioned hypotheses, and it also validates the effectiveness of the initialization method. The experimental result shows that the proposed initialization method makes the feature expression of the approximately optimal solution more stable. IV. EXPERIMENTAL VERIFICATION A. EXPERIMENTAL SETUP To investigate the effectiveness of the proposed method, an experiment is carried out on a defective bearing dataset from the Case Western Reserve University Bearing Data Center [38]. It is the only open public data set that provides bearing fault type and bearing fault severity simultaneously. As shown in Fig. 2, the experiment rig is mainly composed of five parts, i.e., a 2 hp motor, a torque transducer, a dynamometer, a load motor and the tested bearings. The defective drive-end bearings are seeded with single-point faults at the outer raceway, inner raceway, and the rolling elements by using electro-discharge machining (EDM) technology. The fault diameters include four categories, i.e., 7 mils, 14 mils, 21 mils and 28 mils (1 mil = 0.001 inches). The first three types are selected as test subjects in this research. The type of the tested bearings is the deep groove ball bearing 6205-2RS JEM SKF. Vibration data is collected by an accelerometer with a sampling frequency of 12 kHz. B. DATASET PRODUCTION In order to train the proposed deep residual network well, a simple data augmentation method is introduced.
That is, each continuous acquisition of the raw vibration signal is divided into multiple segments with allowing for fifty percent overlap. For example, one vibration signal with 1535 points could be broken up into two samples whose length is 1024 points. Note that there is no intersection of sampling point between training set and test set. In general, there are four types of bearing fault location, i.e., healthy state (HS), outer race fault state (ORFS), inner race fault state (IRFS) and rolling element fault state (REFS). Each of the latter three has three levels of defective severity (7 mils, 14 mils, 21 mils). Therefore, the bearing health status can be divided into ten classes. Detailed information of the training set and test set is provided in Table 1. Fig. 3(a) shows the typical vibration signal of each class. C. RESULTS OF FAULT DIAGNOSIS AND NETWORK VISUALIZATION Based on the principle that is illustrated in Section 2, the proposed deep residual network is constructed. The data length of each input vibration signal is equal to 1024. Max-pooling (MP) operation is applied in the pooling layer. The size of all convolution kernels is 16. Through every other residual block, the input signal is down-sampled with ratio being 2. The activation function is selected as rectified linear unit. The number of neurons is set to 4 in each fully-connected layer. They correspond to 4 different fault locations and 4 different fault severities, respectively. This network is established and programmed by deep learning framework named Pytorch, which is one of the most famous deep learning toolbox and proposed by Facebook. The proposed deep residual network is trained on the bearing training set by the back propagation (BP) algorithm [39]. When the model is trained and optimized over, the weight parameters among all the layers are obtained. Then, the features with rich semantic information and discriminant information can be calculated in the global average pooling layer of proposed deep network, by the use of network inference. This model achieves high accuracy in both task of fault location detection and task of fault severity judgment. The diagnosis accuracy is listed in Table 2 and Table 3 in detail. Fig. 3(b) shows the typically feature of each class. From this figure, it can be seen that these features are hard to be comprehended by researchers even though there are some differences between them. Then, the visualization of what the neurons in FC layers capture goes ahead. The original vibration signals that are depicted in Fig. 3(a) are selected as the initialization values. On the basis of (11) and GD, the sensitive input patterns of the selected neurons are obtained. Fig. 4 depicts them in the input feature space. The result is amazing. It is obvious that the information of fault characteristic frequencies and energy of resonance zone are retained fair well while the noise is suppressed heavily. For example, the sensitive input pattern of the neurons that are related to normal health status is very similar to DC signal with zero mean. The sensitive input signal of neurons that are relevant to slight ORFS behaves that the local disturbance between adjacent transient impulses is almost completely restrained and eliminated. From this result, it can be known that the frequency and energy information of resonance zone of vibration signal play a crucial role in the real fault features. 
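As a simplified illustration of how the sensitive input pattern of one neuron can be searched by gradient ascent starting from a training sample (the natural initialization proposed above), consider the sketch below. It is not the authors' implementation: it enforces the energy bound by projection instead of handling the Lagrange multiplier of (11) explicitly, and the step size and iteration count are arbitrary choices.

```python
import torch

def maximize_activation(model, neuron_fn, x_init, energy_cap, steps=200, lr=0.01):
    """Gradient-ascent search for an input that maximizes one neuron's activation.

    neuron_fn(model, x) must return the scalar activation of the target unit.
    x_init is a training sample (the proposed natural initialization).
    energy_cap bounds the signal energy ||x||^2, limiting the search space.
    """
    x = x_init.clone().requires_grad_(True)
    for _ in range(steps):
        activation = neuron_fn(model, x)
        grad, = torch.autograd.grad(activation, x)
        with torch.no_grad():
            x += lr * grad                            # ascend the activation value
            energy = (x ** 2).sum()
            if energy > energy_cap:                   # project back onto ||x||^2 <= E
                x *= (energy_cap / energy) ** 0.5
    return x.detach()
```

In this reading, initializing from a real vibration sample (rather than random noise) is what keeps the recovered patterns consistent across runs, matching the stability argument made in the optimization subsection.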
The result also demonstrates that the features extracted by the proposed deep residual network indeed characterize the intrinsic feature of bearing fault. To further investigate the effectiveness of the proposed method, a hierarchical diagnosis network (HDN) [25] based on deep belief network (DBN) is employed for comparison. It is also one kind of DNN, and is a cascade network stacked by multiple DBNs. Through literature review, this network is one of the few models that can simultaneously accomplish two tasks of fault diagnosis. It diagnoses fault location and fault severity sequentially. It firstly gives the result of fault location by the previous one DBN, and on this base, the result of fault severity is produced by the latter three DBN models, each of which corresponds to one kind of fault location type. HDN is built by using the same network parameters used in Ref. [25], except that the length of each raw vibration signal is shortened to 1024 points. The input feature of HDN is the wavelet packet energy (WPE) that is calculated from 4-level wavelet packet transform decomposition. The diagnosis result of HDN is listed in Table 2 and Table 3. At the same time, the proposed model with only one task of fault diagnosis is explored while other parts of the network remain unchanged, so as to explore the effectiveness of multi-task learning. For the task of fault location detection, it is trained with only one loss function J L . And for the rest, i.e., the task of fault severity evaluation, it is trained with single loss J S . The results of them are also listed in Table 3 in the names of DRN-L and DRN-S respectively. Obviously, four models achieve comparative performance, while the proposed DRN performs best with 98.9% accuracy in task of fault severity evaluation compared with HDN (98.4%) and DRN-S (97.8%). The result reflects two messages. On the one hand, the proposed deep residual network is also effective in extracting fault-related features from original raw vibration signal, compared with HDN that employs handcrafted features. On the other hand, the DRN outperform DRN-S, which indicates that the former is easier to learn more intrinsic features of bearing fault by feeding more knowledge. The loss of fault location task works as a good regularization term for the fault severity task. Overall, the DRN can be consider as a comprehensive and precise bearing fault diagnosis model by both detecting fault location and judging fault severity. At last, to study the generalization capability and the robustness to noise of the proposed DRN model, each sample in test set is added with additive Gaussian white noise in different signal-to-noise ratio (SNR), while the DRN is trained on the original training set without additive noise. This situation is very close to real industrial production where the noise varies a lot. After all, not all the labeled training samples can be obtained under different noisy environment. Fig. 5 shows the diagnosis accuracy of DRN in different SNR. By observation, the DRN model achieves pretty high accuracy with more than 90% accuracy in two tasks when SNR is larger than 0db. The SNR with 0db means that the power of noise is equal to that of original signal. The result demonstrates that the DRN model has pretty good robustness against noise. In the real industrial environment, the performance of DRN model could be improved with the increase of data amount. V. CONCLUSION This paper proposes an intelligent diagnosis system for bearings based on deep residual network. 
The input features are just raw time-domain vibration signal, not the hand-crafted or pre-defined features by experts. The proposed network combines the loss function of fault location task and that of fault severity task, fed with more knowledge, which makes it extract more intrinsic features of bearing fault. The features extracted by this network can be used in both fault location detection and fault severity evaluation. In this manner, the weak link between the two tasks could be discovered. The subsequent visualization of sensitive input pattern indeed reveals that the frequency and energy information of resonance zone of vibration signal is significant to bearing fault diagnosis. To further deepen the understanding of this network and break the traditional way of using deep learning as ''black box'', the visualization method of higher-layer features is proposed on the basis of maximizing activation value. In this process, one natural and elegant initialization rule of the optimal solution is raised. It helps the researchers comprehend what the hidden units capture in the input feature space. In addition, to provide a comprehensive evidence for the effectiveness of DRN, one other intelligent diagnosis method named HDN is employed for comparison. Meanwhile, the DRN with only one task loss, i.e., DRN-L and DRN-S, are also used as baseline models. As a result, the DRN achieves comparable or even better results in different fault diagnosis tasks. It demonstrates that DRN has a good potential for bearing fault diagnosis with two tasks. In future work, DRN may be combined with other fault information to improve its performance of fault diagnosis. GUANGHUA XU (Member, IEEE) received the B.S., M.S., and Ph.D. degrees from Xi'an Jiaotong University, China, in 1995, all in mechanical engineering. He is currently a Professor with the State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University. His current research directions are in mechanical system reliability, fault diagnosis, and wavelet analysis. He is currently a Professor with the School of Mechanical Engineering, Xi'an Jiaotong University. His research interests include machine learning, pattern recognition, condition monitoring and fault diagnosis, and automatic control. QINGQIANG WU received the B.S. degree from Xi'an Jiaotong University, Xi'an, China, in 2013, where he is currently pursuing the Ph.D. degree with the State Key Laboratory for Manufacturing System Engineering. His research interests includes deep learning, computer vision, rehabilitation robot, and signal processing. VOLUME 8, 2020
6,748
2020-01-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Amplified and quantum based brain storm optimization algorithms for real power loss reduction Received Jan 3, 2020 Revised Feb 19, 2020 Accepted Mar 3, 2020 In this work amplified brain storm optimization (ABS) algorithm and quantum based brain storm (QBS) optimization algorithm is applied to solve the problem. A node is arbitrarily chosen from the graph as the preliminary point to form a Hamiltonian cycle. At generation t and t+1, Lt and Lt+1 are the length of Hamiltonian cycle correspondingly. In the QBS algorithm a Quantum state of an idea is illustrated by a wave function ( ⃗ ) as an alternative of the position modernized only in brain storm optimization algorithm. Monte Carlo simulation method is used, to measure the position for each idea from the quantum state to the traditional one. Proposed ABS algorithm and QBS optimization algorithm has been tested in standard IEEE 57 bus test system and real power loss reduced effectively. INTRODUCTION In this work minimizing true power loss is the main objective of the problem. A variety of methods [1][2][3][4][5][6] have been applied to solve the problem. Subsequently various evolutionary methods [7][8][9][10][11][12][13][14][15][16] applied to solve the problem, in that many algorithms stuck in local optimal solution In this work amplified brain storm optimization (ABS) algorithm and quantum based brain storm (QBS) optimization algorithm is used for solving optimal reactive power problem. Brain storm optimization (BSO) algorithm gets trapped into local optima when applied to different optimization problems. In the mathematical field of graph theory, a Hamiltonian path is a path in an undirected or directed graph that visits each vertex exactly once. In the proposed algorithm Hamiltonian cycle will improve the explore abilities and also stay away from local optimal solution. In QBS algorithm completely, the mechanism of quantum behavior, which causes uncertain of every idea lead to a superior capability to bounce out of the local optimal solution. Proposed ABS algorithm and QBS optimization algorithm has been tested in standard IEEE 57 bus test system. AMPLIFIED BRAIN STORM OPTIMIZATION ALGORITHM BSO [17] gets trapped into local optima when applied to different optimization problems. In the projected amplified brain storm optimization algorithm Hamiltonian cycle has been applied to improve the search abilities and also to avoid of trap in local optimal solution. A node is arbitrarily chosen from the graph as the preliminary point to form a Hamiltonian cycle. At generation t and t+1, L t and L t+1 are the length of Hamiltonian cycle correspondingly. Their ratio r at generation t(r t ) can be described as: Hamilton cycle algorithm as follows: Step 2: is chosen and is picked with least weight linking , then the is obtained. Step 3: when i+1<n, subsequently i+1 is used to substitute i, and revisit to Step 2; condition not occurred , then revisit to the final Hamiltonian cycle then go back to Step 4. Step 6: compute the extent of the Hamiltonian cycle C. End for i In the proposed amplified brain storm optimization (ABSO) algorithm Hamiltonian cycle will improve the explore abilities and also stay away from local optimal solution. 
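The Hamiltonian-cycle steps above read as a greedy nearest-neighbour construction: start from an arbitrary node, repeatedly follow the least-weight edge to an unvisited node, close the cycle, and measure its length L. The sketch below is one plausible reading of that construction, written by us for illustration; the symbols lost from the original steps and the interpretation of the ratio r_t are assumptions, not the authors' code.

```python
import numpy as np

def nearest_neighbour_cycle(weights, start=0):
    """Greedy Hamiltonian cycle: visit every node once, return the order and length.

    weights is an (n, n) symmetric matrix of edge weights; start is the initial node.
    """
    n = len(weights)
    unvisited = set(range(n)) - {start}
    cycle, length, current = [start], 0.0, start
    while unvisited:
        nxt = min(unvisited, key=lambda j: weights[current][j])  # least-weight edge
        length += weights[current][nxt]
        cycle.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    length += weights[current][start]      # close the cycle back to the start node
    return cycle, length

# One plausible reading of the generation ratio mentioned above: r_t = L_t / L_{t+1}
w = np.random.rand(6, 6); w = (w + w.T) / 2; np.fill_diagonal(w, 0.0)
print(nearest_neighbour_cycle(w))
```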
Commence Step 1: "n" potential solutions are arbitrarily engendered. Step 2: The "n" individuals are clustered into "m" clusters. Step 3: The "n" individuals are appraised. Step 4: In every cluster, rank the individuals; the most excellent individuals are recorded as cluster centers. Step 5: A value between 0 and 1 is arbitrarily engendered; if the value is smaller than a probability, then i. a cluster center is arbitrarily chosen; ii. an individual is arbitrarily engendered to swap the chosen cluster center. Step 1: node v1 is chosen as the initial point. Step 2: the next node is chosen and the edge with the least weight linking it to the current node is picked; the corresponding partial cycle is then obtained. Step 3: when i+1 < n, i+1 is used to substitute i, and revisit Step 2. Step 4: for all i and j in the cycle. Step 5: C is substituted by C1, and revisit Step 4. Step 6: compute the length of the Hamiltonian cycle C. Step 7: when "n" new individuals have been engendered, go to Step 8; or else go to Step 6. Step 8: end if the termination conditions are met; or else go to Step 2. End QUANTUM BASED BRAIN STORM OPTIMIZATION ALGORITHM In the BSO algorithm the population is referred to as a swarm; moreover, every individual is described as an idea. Initially, every idea is arbitrarily initialized inside the exploration space. Subsequently, the most excellent idea in every cluster is selected as the cluster centre. Sporadically, an arbitrarily chosen centre is swapped with a recently engendered idea, by which the swarm is kept away from the local optimum. A factor is also used in the evolution process. The quantum state of an idea is illustrated by a wave function, as an alternative to the position-only update of the brain storm optimization algorithm. By using the Schrödinger equation, the probability density function of the position at which each idea is located is identified. The Monte Carlo simulation method is used to measure the position of each idea from the quantum state to the traditional one. Step a: Initialize the parameters. Step b: Arbitrarily produce "n" ideas. Step c: Cluster the "n" ideas with the k-means algorithm. Step d: With a predetermined probability, update the centre of an arbitrarily chosen cluster. Step e: A new generation of individuals is created. Step f: The quantum mechanism is exploited based on the chosen idea. Step g: The crossover operator is implemented. Step h: Evaluate the new idea against the older one. Step i: If "n" ideas have been engendered, then go to Step j; or else go to Step e. Step j: Stop if the present number of iterations Nc attains Ncmax; or else, continue with the next iteration. SIMULATION STUDY The proposed ABS optimization algorithm and QBS optimization algorithm have been tested on the IEEE 57 bus system [18]. Table 1 shows the comparison results. CONCLUSION In this paper the ABS optimization algorithm and QBS optimization algorithm successfully solved the optimal reactive power problem. In the projected ABS algorithm, the Hamiltonian cycle has been utilized to escape BSO from local optima and to maintain the optimization process. In the mathematical field of graph theory, a Hamiltonian path is a path in an undirected or directed graph that visits each vertex exactly once. In the QBS approach, by using the Schrödinger equation, the probability density function of the position at which each idea is located is identified. The Monte Carlo simulation method is used to measure the position of each idea from the quantum state to the traditional one.
Proposed ABS algorithm and QBS optimization algorithm has been tested in standard IEEE 57 bus test system and simulation results show the projected algorithms reduced the real power loss efficiently.
1,552.6
2021-03-01T00:00:00.000
[ "Physics" ]
CXCR4 promotes B cell viability by the cooperation of nuclear factor (erythroid-derived 2)-like 2 and hypoxia-inducible factor-1α under hypoxic conditions B cells that interact with T cells play a role in regulating the defense function by producing antibodies and inflammatory cytokines. C-X-C chemokine receptor type 4 (CXCR4) is a specific receptor for stromal cell-derived factor 1 (SDF-1) that controls various B cell functions. Here, we investigated whether CXCR4 regulates B cell viability by inducing hypoxia-inducible factor (HIF)-1α and nuclear factor (erythroid-derived 2)-like 2 (Nrf2) under a hypoxic condition in WiL2-NS human B cells. Nrf2 and CXCR4 expressions increased significantly when WiL2-NS cells were incubated under a hypoxic condition. Interfering with CXCR4 expression using CXCR4-siRNA inhibited cell viability. CXCR4 expression also decreased after treatment with a HIF inhibitor under the hypoxic condition, leading to inhibited cell viability. Increased reactive oxygen species (ROS) levels and the expression of HIF-1α and Nrf2 decreased under the hypoxic condition following incubation with N-acetylcysteine, a ROS scavenger, which was associated with a decrease in CXCR4 expression. CXCR4 expression was augmented by overexpressing Nrf2 after transfecting the pcDNA3.1-Nrf2 plasmid. CXCR4 expression decreased and HIF-1α accumulation decreased when Nrf2 was inhibited by doxycycline in tet-shNrf2-expressed stable cells. Nrf2 or HIF-1α bound from −718 to −561 of the CXCR4 gene promoter as judged by a chromatin immunoprecipitation assay. Taken together, these data show that B cell viability under a hypoxic condition could be regulated by CXCR4 expression through binding of HIF-1α and Nrf2 to the CXCR4 gene promoter cooperatively. These results suggest that CXCR4 could be an additional therapeutic target to control B cells with roles at disease sites under hypoxic conditions. Introduction B cells affect tumor development and behavior through pro-tumor and anti-tumor immune responses 1,2 . B cells produce antibodies and release various cytokines [3][4][5] to modulate immune responses 6,7 . B cells are exposed to a variety of oxygen concentrations that determine their migration, development, and differentiation 8 . Oxygen concentration is highly associated with the division, proliferation, and survival of cells [9][10][11][12][13][14] . Low oxygen tension is related to a variety of pathological conditions, including cancer, rheumatoid arthritis, chronic inflammatory bowel disease, and ischemia/reperfusion injury 15 . Hypoxia is a feature of physiological and pathological immunological niches 10 . Under hypoxic conditions, many immune cells, including B cells, play a role in controlling the disease condition 8 . However, little has been reported about how B cells are controlled under hypoxic conditions. Hypoxia and the hypoxia-inducible factor (HIF) signaling pathway are crucial for B cell development and function, such as survival, proliferation, and cytokine production 16 . Inappropriate regulation of B cells contributes to various diseases, including autoimmune, malignant, allergic, and other conditions 6,17 . Therefore, B cell behavior should be studied under hypoxic conditions to understand the regulation of B cell-associated diseases 18,19 . However, little is known about which factors play a role in regulating B cells under hypoxic conditions. HIF is a heterodimeric transcription factor comprising α and β subunits in the mammalian response to low oxygen level 20 . 
The two prolines on HIF-1α are hydroxylated by prolyl hydroxylase (PHD) and the HIF-1α protein is degraded by ubiquitination with the von Hippel-Lindau (VHL) complex under normoxic conditions 21 . Under hypoxic conditions, HIF-1α is stabilized, dimerized with the HIF-1β subunit, and binds to hypoxia response elements in the nucleus 22 . A hypoxic condition also increases the production of reactive oxygen species (ROS) to activate HIF1-α by inactivating PHD 23,24 . Hypoxia-driven ROS activates nuclear factor erythroid 2-like 2 (Nrf2), which plays a crucial role in regulating the transcription of antioxidant genes to reduce ROS accumulation 25 . Nrf2 is negatively regulated by binding to the Kelch-like erythroid cell-derived protein with cap'n'collar homologyassociated protein 1 and proteasomal degradation via Cullin3 26 . Then, Nrf2 translocates to the nucleus and subsequently binds to antioxidant response elements (ARE) on promoters of antioxidant genes, such as NADPH and heme oxygenase 1 27 . The Nrf2 protein is closely related to decreased survival and increased metastasis in several cell types 28 . B cell survival is associated with oxidative stress-mediated activation of Nrf2 29 . Chemokine receptors cause cells to migrate toward a chemotactic cytokine gradient (chemotaxis). Among them, CXCR4 is a 352 amino acid rhodopsin-like G protein-coupled receptor that selectively binds CXC chemokine stromal cell-derived factor 1 (SDF-1) known as CXCL12 30,31 . CXCR4 is selectively induced by activating HIF-1 on different types of cancer cells under hypoxic conditions [32][33][34][35] . The von Hippel-Lindau tumor suppressor protein pVHL negatively regulates CXCR4 expression by targeting HIF for degradation under normoxic conditions 36 . In addition, CXCR4 is expressed on monocytes, B cells, and naive T cells 37,38 . CXCR4 is produced by all subsets of B cells and plays a role in the homeostasis of B cell compartments and humoral immunity 39,40 . Little is known about how CXCR4 expression is regulated by Nrf2 and HIF-1α to control B cell survival under hypoxic conditions. In this study, we investigated whether B cell viability was regulated by CXCR4 via the cooperation of Nrf2 and HIF-1α under hypoxic conditions. Our data showed that CXCR4 expression was regulated by HIF-1α and Nrf2 in response to hypoxia-induced ROS, leading to the regulation of B cell survival. These results suggest that CXCR4 could be a novel therapeutic target to control B cells under hypoxic conditions. Cell cultures WiL2-NS, a human B lymphoblast cells was acquired from the Korea Research Institute of Bioscience and Biotechnology (KRIBB) cell bank (Daejeon, Korea). Cells were tested if they were free from mycoplasma contamination. Cells were incubated with RPMI medium 1640 (GIBCO, Grand Island, NY, USA) supplemented with 10% heat-inactivated fetal bovine serum (FBS) (GIBCO, Grand Island, NY, USA), 100 units/ml of penicillin/streptomycin and 2 mM L-glutamine (GIBCO, Grand Island, NY, USA) at 37°C humidified incubator with 5% CO 2 condition 41,42 . Hypoxia treatment 43 For incubation under hypoxic condition, cells were placed in an atmosphere of 1% O 2 , 5% CO 2 , 10% H 2 , and 84% N 2 with intermittent flushing with nitrogen, sealed, and then maintained in a humidified incubator at 37°C in a hypoxic chamber (Forma Anaerobic System, Thermo Electron Corporation, Marietts, OH, USA). Hypoxia-treated cells were harvested inside hypoxic chamber to prevent the rapid degradation of hypoxia-responsive molecules. 
Preparation of the stable Nrf2-knockdown (KD) cells 41 Lentiviral vector of Nrf2-shRNA (shNrf2) was packaged into virus particle by the method reported previously 44 , which was provided by Sang-Min Jeon, Professor, College of Pharmacy, Ajou University, Gyeonggido, Republic of Korea. 293T cells were transfected with lentiviral vector using Lipofectamine ® 2000 according to the recommended protocol on the Addgene website. Lentiviruscontaining conditioned medium (LCCM) was aliquoted into 1 ml stock in each cryovial. Then, WiL2-NS stable cells that do not express Nrf2 (tet-shNrf2, +Dox) were prepared as follows. Briefly, WiL2-NS 1 × 10 5 cells were incubated in each well of six-well plate overnight. Cell culture in 500 μl medium of each well was mixed with 1 ml LCCM and 1.2 μl polybrene (Millipore TR-1003-G). Culture medium was changed with 2 ml fresh medium containing 250 μg/ml hygromycin (Cayman 14291). The infected shNrf2-positive control cells (tet-shNrf2, -Dox) were selected by the treatment with hygromycin B (250 µg/ml) every 3 days. Nrf2-KD cells were obtained and maintained by the treatment with 0.2 μg/ml doxycycline (Cayman 14422) every 2 days. Cytotoxicity assay Cell survival was quantified by counting cells with trypan blue assay 41 . For trypan blue exclusion assay, cell suspension was mixed with equal volume of 0.4% trypan blue in PBS. Dying or dead cells were stained with blue color and viable cells were unstained. Each cell was counted by using hemocytometer under light microscope (Olympus Korea Co., Ltd, Seoul, Republic of Korea). Total cell number in each state was calculated by the multiplication with dilution factor. Measurement of ROS 41 Intracellular ROS level was determined by incubating cells with or without 10 μM of 2′,7′-dichlorofluorescin diacetate (DCF-DA) (Molecular Probe, Eugene, USA) at 37°C for 20 min. Fluorescence intensity of 10,000 cells was analyzed by CELLQuest™ analyzing software in FACS Calibur™ (Becton Dickinson, San Joes, CA, USA). Also, intracellular ROS level was observed with DCF-DA by fluorescence microscopy. Transfection of nucleic acids 45 Each plasmid DNA, siRNAs for CXCR4 and Accu-Target™ negative contol siRNA were transfected into cells as follows. Briefly, each nucleic acid and Viafect™ (Promega Co., Madison, USA) was diluted in serum-free medium and incubated for 5 min, respectively. The diluted nucleic acid and Viafect™ reagent was mixed by inverting and incubated for 20 min to form complexes. In the meanwhile, cells were stabilized by the incubation with culture medium without antibiotics and serum for at least 2 h prior to the transfection. Pre-formed complexes were added directly to the cells and cells were incubated for an additional 6 h. Then, culture medium was replaced with antibiotic and 10% FBS-containing DMEM and incubated for 24-72 h prior to each experiment. Nrf2 was overexpressed by the transfection of cells with pCDNA-Nrf2 plasmid DNA, which was accompanied with pCDNA3.1 for control group using Viafect™. WiL2-NS cell were transfected with wildtype or mutant pEZX-PG02-hCXCR4-Gluc plasmid DNA using ViaFect™ (Promega Co., Madison, WI, USA) to measure the activity of hCXCR4 promoter. At the same time, cells were cotransfected with pcDNA-lacZ for monitoring transfection efficiency by β-galactosidase assay. Then, cells were incubated for an appropriate time. Secreted Gluc reporter protein was obtained by the collection of cultureconditioned media after the indicated time intervals. 
Gluc activity of reporter protein was measured by Bio-Lux ® Gluc assay kit (New England BioLabs, Ipswich, MA, USA) including coelenterazine as a substrate for Gluc according to the manufacturer's protocol. Luminescence was measured using luminometer (Berthold Technologies, Oak Ridge, TN, USA). Luciferase activity unit was normalized to this control β-galactosidase activity. Chromatin immunoprecipitation (ChIP) assay ChIP assay were performed as describied previously 43,46,47 . Cells were crosslinked with final concentration 1% formaldehyde for 10 min at room temperature. Then, 125 mM glycine was added to quench unreacted formaldehyde. Cells were gathered and sonicated to make DNA fragments with a size range of 200-1000 bp. Cell extracts were immune-precipitated using 2 μg anti-Nrf2, anti-HIF-1α, or rabbit IgG control (Abcam, Cambridge, UK) for each sample suspended in 450 μl ChIP dilution buffer (0.01% SDS, 1.1% Triton X-100, 1.2 mM EDTA, 16.7 mM Tris-HCl, pH 8.1, 167 mM NaCl) purchased from Cell signaling Technology (Cat# 20-153, Danvers, MA). For all ChIP experiments, PCR analysis were performed by using multiple sets of primers spanning the transcription factor-binding site on hCXCR4 gene promoter. Samples were separated with protein size by SDS-polyacrylamide gel electrophoresis (SDS-PAGE). Separated samples of SDS-PAGE were transferred to nitrocellulose membrane. Membranes were blocked with 2% gelatin or 5% non-fat skim milk in Tris-buffered saline containing 0.05% Tween 20 (TBST). After blocking, the primary antibodies reactive specific targets were incubated for several hours. Secondary antibodies (horse radish peroxidase (HRP)-conjugated) were used to visualize targetspecific primary antibody by reaction with D-Plus™ chemiluminescence (ECL) system (Dongin Life Science, Seoul, Republic of Korea). Immuno-reactive targets were detected by X-ray films (Agfa healthCare, CP-BU new). Statistical analysis system Experimental differences were determined separately for statistical significance using Students' t-test and ANOVA. hCXCR4 expression regulates B cell viability under hypoxic conditions Given that hCXCR4 is expressed 37,38 and B cells play a role in controlling the disease condition 8 under hypoxic conditions, we examined whether hCXCR4 expression controlled B cell viability. To test the effect of hypoxia on B cells, WiL2-NS cells were incubated under normoxic and hypoxic conditions. Cell viability and total cell number decreased after incubation under hypoxic conditions compared with that under normoxic conditions as judged by the trypan blue exclusion assay ( Fig. 1A and B). In response to 36 h incubation, cell viability under hypoxic condition is 12% lower than that under normoxic (Fig. 1A) and an increase in total cell number was twice under hypoxic condition compared to that 6.5 times under normoxic condition (Fig. 1B). When WiL2-NS cells were transfected with pEZX-PG02-hCXCR4-gaussia luciferase (Gluc) plasmid DNA and incubated under hypoxic conditions, hCXCR4 promoter activity increased~20%,~30%, or~35% after 2, 4, or 8 h of incubation under hypoxic conditions (Fig. 1C). These results were confirmed by reverse transcription-polymerase chain reaction (RT-PCR) and qPCR analyses (Fig. 1D and E). hCXCR4 expression increased~1.2,~2.0, and~2.4 times after 2-, 4-, and 8-h incubations, respectively, under hypoxic conditions (Fig. 1E). To confirm the effect of hypoxia on hCXCR4 expression, we prepared a deletion mutant type (−100 to −35) of the rhCXCR4 promoter (Fig. 1F). 
When WiL2-NS cells were transfected with wild-type and mutant-type pEZX-PG02-hCXCR4-Gluc plasmid DNA and incubated under hypoxic conditions, Gluc activity decreased~45%,~60%, and~75% after 2-, 4-, and 8-h incubations under hypoxic conditions compared to those under normoxic conditions after deleting upstream of the hCXCR4 promoter (Fig. 1G). When hCXCR4 expression was inhibited by transfection with hCXCR4-siRNA as judged by RT-PCR (Fig. 1H, top) and western blot analysis under normoxic conditions (Fig. 1H, bottom), cell viability under hypoxic conditions decreased 33% compared to that treated with negative control siRNA (Fig. 1I), suggesting that B cell viability is regulated by hCXCR4 expression under hypoxic conditions. Hypoxia-induced HIF-1α regulates Nrf2 and hCXCR4 expression As hCXCR4 is regulated by hypoxia-induced HIF-1α 38 , we investigated whether hypoxic conditions regulate hCXCR4 expression in B cells. Thus, WiL2-NS cells were incubated under hypoxic conditions or with CoCl 2 . As shown in Fig. 2A, the hypoxic conditions enhanced HIF-1α, Nrf2, and hCXCR4 as judged by western blot analysis. Enhancement of these molecules was also confirmed by treatment with the chemical hypoxic mimicker CoCl 2 (Fig. 2B). Hypoxic condition is not the same to chemical hypoxia induced by CoCl 2 stabilizing HIF-1α. So, Nrf2 expression pattern in response to CoCl 2 is more longlasting than that by hypoxia. Increased hCXCR4 expression was attenuated by co-treatment with the HIF inhibitor according to the RT-PCR analysis (Fig. 2C) and was confirmed by qPCR. The HIF inhibitor inhibited hCXCR4 expression by~35% and~45% after 2-and 4-h incubations under hypoxic conditions, respectively (Fig. 2D), which was confirmed by western blot analysis (Fig. 2E). Changes in cell viability in response to the HIF inhibitor were assessed by the trypan blue exclusion assay. The HIF inhibitor Fig. 1 B cell viability is regulated by hCXCR4 expression under hypoxic condition. A, B WiL2-NS cells were incubated for 12, 24, 36 h under normoxic or hypoxic conditions. Cell viability was assessed by using trypan blue exclusion assay (A) and cell number was calculated (B). C-E WiL2-NS cells were transfected with pEZX-PG02-hCXCR4-gaussia luciferase (Gluc) plasmid DNA using Viafect™. After 30 h transfection, WiL2-NS cells were incubated with hypoxia and Gluc activity of hCXCR4 promoter (pmt) was measured by using luminometer (C). RNA was extracted by using Nucleozol. mRNA of hCXCR4 was measured by RT-PCR (D) or real-time qPCR (E). F, G Mutant promoter to hCXCR4 was prepared from wild type promoter (F). WiL2-NS cells were transfected with wild-type or mutant-type pEZx-PG02-hCXCR4-Gluc promoter plasmid DNA. 30 h after transfection, WiL2-NS cells were incubated under hypoxic condition and Gluc activity was measured by using luminometer (G). H, I Cells were transfected with CXCR4-siRNA under normoxic condition. RNA was extracted by using Nucleozol. mRNA of hCXCR4 was measured by RT-PCR (H, top). Cell lysates were prepared and the level of each protein was measured by western blot analysis (H, bottom). Cell viability under hypoxic conditions was assessed by trypan blue exclusion assay (I). Each experiment was performed at least five times. Data in bar or line graphs stands for the means ± SD. **p < 0.01; significantly different from control group under normoxic condition (A-C, E, G) or negative control siRNA-treated group (H). ## p < 0.01; significantly different from control transfected with wildtype hCXCR4-pmt at each time point (G). 
reduced B cell viability~33% under hypoxic conditions (Fig. 2F). These results suggest that hCXCR4 could enhance B cell viability and might be associated with the expression of Nrf2 and HIF-1α under hypoxic conditions. Hypoxia-induced ROS affect HIF-1α, Nrf2, and hCXCR4 expressions As hypoxia increases ROS 24 , we determined whether hypoxia changed intracellular ROS levels in B cells using DCF-DA. When WiL2-NS cells were incubated in the presence or absence of NAC under hypoxic conditions, ROS production was observed under a fluorescence microscope (Fig. 3A, top). ROS production was also measured by flow cytometry analysis (Fig. 3B, left), which could show the distribution of each single cell having different level of ROS. Then, results revealed that the mean fluorescence intensity (MFI) under the hypoxic condition was higher than that under a normoxic condition (Fig. 3C, light gray bar). ROS production decreased in response to NAC, which was assessed by a decrease in the number of fluorescent cells (Fig. 3A, bottom), a left-ward shift in the histogram (Fig. 3B, right), and reduced MFI (Fig. 3C, dark gray bar). hCXCR4 expression under the hypoxic condition was also attenuated by the NAC treatment. hCXCR4 expression was measured by qPCR, which Cell lysates were prepared and the level of each protein was measured by western blot analysis. C-F WiL2-NS cells were incubated in the presence or the absence of HIF inhibitor under hypoxic conditions. RNA was extracted by using Nucleozol. mRNA of hCXCR4 was measured by RT-PCR (C) or qPCR (D). Relative hCXCR4 transcripts were represented as bar graph (D). Cell lysates were prepared and the level of each protein was measured by western blot analysis (E). Cell viability was assessed by trypan blue exclusion assay and represented as bar graph (F). Each experiment was performed at least five times. Data in bar graphs represented the means ± SD. *p < 0.05, **p < 0.01; significantly different from control group under normoxic condition. # p < 0.05, ## p < 0.01; significantly different from HIF-1α inhibitor-untreated group (D, F). showed that NAC inhibited hCXCR4 expression~60% and~40% after 2-and 4-h incubations under hypoxic conditions, respectively (Fig. 3D). The reduced hCXCR4 expression in response to NAC was confirmed by RT-PCR (Fig. 3E) and western blot analyses (Fig. 3F). Nrf2 level in the absence of NAC was consistent with the previous result ( Fig. 2A) by the incubation for 2 h under hypoxic condition. The protein levels of HIF-1α and Nrf2 were inhibited by NAC under hypoxic conditions (Fig. 3F). These results suggest that hypoxiainduced ROS regulate hCXCR4 expression. Then, we predicted the Nrf2-binding sites in the hCXCR4 promoter sequence using the TRANSFEC (version 8.3) database ( Supplementary Fig. S1, online). To examine the interaction between Nrf2 and the hCXCR4 promoter, we performed a ChIP assay using an anti-Nrf2 antibody. As shown in Fig. 6A, Nrf2 bound to the hCXCR4 promoter under hypoxic conditions. HIF-1α also bound significantly to the hCXCR4 promoter under hypoxic conditions (Fig. 6B). However, while no binding site for HIF-1α was predicted in the hCXCR4 promoter sequence using the TRANSFEC (version 8.3) database, the sequence (−650 to −644, GCACRTG) on upstream of the second Nrf2-binding site (−636 to −626, TGATGCTGTGA) was partially matched to HIF-1α-binding site ( Supplementary Fig. S1, online). 
Then, hCXCR4 expression was cooperatively inhibited by treatment with the HIF inhibitor and Nrf2-KD (tet-shNrf2, +Dox) in WiL2-NS cells under hypoxic conditions (Fig. 6C). Cell viability was attenuated synergistically by treatment with the HIF inhibitor in Nrf2-KD cells under hypoxic conditions (Fig. 6D). These results suggest that hypoxiainduced ROS control CXCR4 expression through the cooperative interaction between HIF-1α and Nrf2 on its promoter. Through this molecular mechanism, CXCR4 regulates B cell viability under hypoxic conditions (Fig. 6E). promoter (pmt) was measured by using luminometer (A). RNA was extracted by using Nucleozol. mRNA of hCXCR4 was measured by RT-PCR (B) or qPCR (C). Cell lysates were prepared and the protein level of each protein was measured by using western blot analysis (D). E WiL2-NS cells were cotransfected with wild or mutant type of pEZx-PG02-hCXCR4-Gluc promoter plasmid DNA with pcDNA3.1 control or pcDNA3.1-Nrf2 plasmids using Viafect™. 30 h after transfection, WiL2-NS cells were incubated under hypoxic condition and Gluc activity was measured by using luminometer. Each experiment was performed at least five times. Data in a bar graph represented the means ± SD. *p < 0.05, **p < 0.01; significantly different from pcDNA3.1-transfected control group. # p < 0.05, ## p < 0.01; significantly different from control transfected with wildtype CXCR4 pmt plasmid (A, C, E). Discussion CXCR4 is a specific receptor for SDF-1 that control B cells at multiple stages of development 39 . CXCL12 is a homeostatic chemokine that signals via CXCR4 48 and plays an important role in the development, hematopoiesis, and organization of the immune system 49 . CXCL12 binding to CXCR4 stimulates various signal transduction pathways that regulate intracellular chemotaxis, calcium flux, transcription, and cell survival 50 . Hypoxia and the HIF signaling pathway play important roles in B cell function and development, which are crucial for B cell survival, proliferation, and cytokine production 16 . Cancer treatment requires B cells, but inhibition is required in the case of autoimmune diseases 18,19 . Therefore, determining the factors that control B cell survival is critical for treating hypoxia-related diseases. Here, we investigated whether B cell viability is regulated by CXCR4 via induction of HIF-1α and Nrf2 under hypoxic conditions in WiL2-NS human B cells. Cell viability decreased under hypoxic conditions by inhibiting CXCR4 expression in WiL2-NS cells. Also, hypoxia increased HIF-1α and Nrf2 expressions (Fig. 1). CXCR4 expression in WiL2-NS cells was inhibited by treatment with the HIF inhibitor, which also inhibited Nrf2 expression (Fig. 2). ROS production increased the expression of HIF-1α, Nrf2, and CXCR4 under hypoxic conditions, which decreased in response to NAC (Fig. 3). CXCR4 expression was inhibited by Nrf2 KD (Fig. 4), and increased by Nrf2 overexpression (Fig. 5). HIF-1α was also affected by Nrf2 KD (Fig. 4). CXCR4 expression increased by the binding of HIF-1α and Nrf2 to the CXCR4 promoter. B cell viability was cooperatively attenuated by Nrf2 KD and by treatment with the HIF inhibitor (Fig. 6). These results suggest that HIF-1α and Nrf2 cooperatively regulate CXCR4 expression under hypoxic conditions. It also suggests that hypoxia-induced CXCR4 protects B cells. CXCR4 is a receptor that selectively binds SDF-1 known as CXCL12 30,31 , and CXCR4 is related to HIF-1α, which facilitates cancer cell survival 39 through binding with its CXCL12 ligands 48,50 . 
It could explain the regulation of CXCR4 expression under hypoxic condition. Hypoxiainduced CXCR4 could bind more CXCL12 molecules and stimulates intracellular signaling pathways. Then, it is expected the possibility that there is a positive feedback mechanism by which CXCR4 regulates Nrf2 and HIF-1α expressions. It will be required to define the possible Sequences for primer set were CTACATCTGATCAGTCTCCAG (forward) and AGCCCATTCAGGAGGTAA (reverse). Primer set corresponds to -718 to -561 bp including the second Nrf2 binding (-636 to -626 bp) on hCXCR4 promoter. C, D The infected shNrf2-positive control (tet-shNrf2, −Dox) and Nrf2-KD (tet-shNrf2, +Dox) WiL2-NS cells were incubated in the presence or the absence of HIF-1α inhibitor under hypoxic condition. Cell lysates were prepared and the protein level of each protein was measured by using western blot analysis (C). Cell viability was measured by using trypan blue exclusion assay. Each experiment was performed at least five times. Data in a bar graph represented the means ± SD. *p < 0.05, **p < 0.01; significantly different from control group under normoxic condition. # p < 0.05, ## p < 0.01; significantly different from shNrf2-positive control (tet-shNrf2, −Dox) with untreatment of HIF inhibitor under hypoxic condition. $$ p < 0.01; significantly different from Nrf2-KD (tet-shNrf2, +Dox) or HIF inhibitor-treated group under hypoxic condition (D). E This is a schematic regulatory mechanism of B cell survival by binding Nrf2 and HIF-1α on hCXCR4 promoter under hypoxic condition. It suggests that hypoxia-induced ROS controls CXCR4 expression via the interaction of HIF-1α and Nrf2 on its promoter cooperatively. mechanism on B cell response by further study including other specific cells to release CXCL12. In the meanwhile, if there is a positive feedback mechanism by binding CXCL12 to CXCR4, it is considered that the decrease in survival rate caused by inhibition of CXCR4 under hypoxia is due to the disruption of the antioxidant defense system brought by Nrf2. So, it might not rule out that CXCR4 and HIF-1α regulate cell survival independently under hypoxic condition. It is also possible for CXCR4 to regulate B cell survival through the secretion of CXCL12 from WiL2-NS cells under hypoxic condition. However, CXCL12 is mainly released by cancer-associated fibroblasts (CAFs), macrophages 51 , and bone marrow stromal cells (BMSCs) 52 . B cells are recruited to stroma cells through CXCR4-CXCL12 interaction. Then, it is impossible for B cells to release CXCL12 and be influenced by its autocrine effect under hypoxic condition. CXCR4-targeted therapeutic approaches are being evaluated in preclinical studies to treat various diseases, including cancers 49,53 . Inhibiting CXCR4 inhibits tumor growth, reduces lung metastasis, and improves survival after sorafenib treatment 54 . Our data show that CXCR4 also plays a role in controlling B cell survival under hypoxic conditions, suggesting that CXCR4 could be a pivotal molecule regulating B cells in the tumor microenvironment. In contrast, CXCR4 enhanced B-1a cell migration to bone marrow, which produce IgM antibodies during health and disease 55 . Thus, controlling molecular changes under hypoxic conditions is a therapeutic strategy for treating cancer and inflammatory disease 15 . Our data show that HIF-1α and Nrf2 may protect B cells against hypoxiainduced ROS by increasing CXCR4. 
The findings suggest that the strategy to control CXCR4 expression could differ depending on the cells targeting different types of diseases. Hypoxia is a hallmark of infected, inflamed, or damaged tissue 56,57 . The increase of ROS in response to hypoxia causes oxidative damage to cells 58 , which occurs in most disease conditions 23,25,59 . Mitochondrial ROS are engaged in transcriptional and translational regulation of HIF-1α by inhibiting PHD 24 , particularly through the ERK and PI3K/ AKT pathways 23 . The von Hippel-Lindau tumor suppressor protein pVHL negatively regulates CXCR4 expression owing to its capacity to degrade HIF under normoxic conditions 36 . Peroxisome proliferator-activated receptor gammadependent downregulation of CXCR4 in cancer cells slows the rate of metastasis 60 . So, controlling CXCR4 changes will be a useful strategy to treat various diseases. Taken together, hypoxia-induced ROS may regulate CXCR4 expression through the cooperation with HIF-1α and Nrf2. Through this molecular mechanism, we suggest that B cell survival is regulated by hypoxia-induced CXCR4 expression, which could be a novel therapeutic target for hypoxia-associated diseases.
6,168.4
2021-03-26T00:00:00.000
[ "Biology", "Medicine" ]
Converting DICOM to STL for 3D Printing: A Process, and Software Package Comparison Abstract Background : Extracting and three-dimensional (3D) printing an organ in a region of interest in DICOM images typically calls for segmentation in support of 3D printing as a first step. Next, the DICOM images are converted to STL data. After primary and secondary processing, including noise removal and hole correction, the STL data can be 3D printed. The quality of the 3D model is directly related to the quality of the STL data. This study focuses and reports on conversion performance for nine software packages. Methods : Multi-detector row CT scanning was performed on a dry human mandible with two 10-mm-diameter bearing balls as a phantom. The DICOM images file was then converted to a STL file using nine different commercial/open-source software packages. Once the STL models were constructed, the data properties and the size and volume of each were measured and differences across the software packages were noted. Additionally, to evaluate differences between the shapes of the STL models by software package, each pair of STL models was superimposed, with observed differences between their shapes characterized as shape error. 
Further, deformation caused by reduction in the number of triangles was evaluated. Results : The data size and the number of triangles were different across all software packages. The constructed ball STL model expanded in the X-, Y-, and Z-axis directions, with the length in the Z-axis direction (body axis direction) being slightly longer than other directions. There were no significant differences in shape error across software packages for the mandible STL model. No shape change was observed relative to reduction in the number of triangles. Conclusions : Statistically, no significant differences were found across software packages for size and volume. However, different characteristics of each software package were noticeable, such as different effects in the thin cortical bone area, likely due to the partial volume effect, which may reflect differences in image binarization algorithms. Although the shape of the STL model differs slightly depending on the software, our results indicate that shape error in 3D printing for clinical use in oral and maxillofacial surgery remains within acceptable limits. Background 3 Digital Imaging and COmmunications in Medicine (DICOM) is the leading standard around the world within the medical imaging information field. Three-dimensional (3D) printing from DICOM images has become easier with advancement of technologies such as medical engineering, imaging engineering, and the evolution and decreasing costs of hardware and software. Patient-specific 3D models are now being used in many situations within the oral and maxillofacial surgery fields, including education, surgical planning, and surgical simulation [1][2][3][4]. 3D printing of DICOM images works with stacked 2D images that must be converted into a data format required by the 3D printer. For this purpose, DICOM images are now being converted to 3D CAD (Computer-Aided Design) format for intermediate data, on which primary-processing such as region of interest (ROI) setting can be performed. Of the approximately 40 file formats of 3D CAD data that are used as 3D native files and intermediate files, an STL (STereoLithography) file format is the most commonly used for 3D printing. There are also many commercial (fee-based) and open-source (free-of-charge) software packages for converting DICOM images into STL data, all of which can run on a general-purpose personal computer (PC). In 2018, we reported in 3D Printing in Medicine a "one-stop 3D printing lab" that enables data construction for 3D printing in one facility [5]. In this lab the first step toward 3D printing is converting the DICOM images into STL data and constructing the STL (3D CAD) model. We have found that the shape of the constructed STL model varies slightly from one software package to another. The quality of the STL data affects the 3D printing, and insufficient STL data can lead to the unsuccessful fabrication of 3D models. We focus on the performance of software packages that convert DICOM images into STL data and report on a comparative analysis across the packages to understand the differences of each and their characteristics. The purpose of this study was to investigate the points to be noted in designing STL data for 3D printing the higher definition of 3D models in the field of oral and maxillofacial surgery. 
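The basic STL properties compared in this study (number of triangles, data size, and model volume) can also be inspected programmatically. The sketch below is a minimal illustration of such a check, assuming the numpy-stl library is available; the file name is a placeholder, and the volume returned by get_mass_properties is only meaningful for a closed, watertight mesh.

```python
# Inspect basic properties of an STL model (illustrative sketch).
import os
from stl import mesh  # numpy-stl

def describe_stl(path):
    m = mesh.Mesh.from_file(path)                    # binary or ASCII STL
    n_triangles = len(m.vectors)                     # one 3x3 array per triangle
    file_size_kb = os.path.getsize(path) / 1024.0
    volume, cog, inertia = m.get_mass_properties()   # volume in model units^3
    return n_triangles, file_size_kb, volume

tri, kb, vol = describe_stl("mandible.stl")  # placeholder file name
print(f"{tri} triangles, {kb:.0f} kB, volume {vol:.2f} mm^3")
```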
Methods In this study, forms of software that convert DICOM images into STL file format data (or that offer a conversion function) are referred to as "software packages", and a 3D surface model (virtual 3D model) constructed from STL data is referred to as "an STL model". Multi-detector row CT (MDCT) scanning was performed on a dry human mandible with two 10-mmdiameter aluminum bearing balls attached to the left and right mental regions as phantoms. A gap of about 1 mm was maintained between the mandible and ball to aid segmentation in PC. The DICOM images have been converted to a binary STL file using one of these packages. First, the size and volume of each STL model were measured. Besides, all mandible STL models were compared to gauge whether there were differences in the shapes of constructed STL models that could be correlated with differences in software, and if so, which areas were affected. Finally, shape changes due to reduction in the number of triangles were noted. 1. DICOM to STL data conversion Table 1 shows details of the nine software packages available for this purpose that can be run on a PC. ROI and threshold were set for each software package to construct the STL model. The threshold for binarization was set to 350 as a voxel value (brightness value) corresponding to a CT value across all software packages. For packages that support a parameter for resolution, it was set to "maximum". Some software packages were able to reduce the data size when converting to STL data; for these, "no data size reduction (or minimum)", "no smoothing" was selected. The STL data was exported in binary format. ImageJ, by default, does not have an STL convert/export function, so a plugin tool (3D Viewer, https://imagej.nih.gov/ij/plugins/3d-viewer) was installed. 2. 3D coordinate system and measurement Figure 1 shows the coordinate system in 3D space, and measurement of the length of the STL models in the X-, Y-, and Z-axis directions using the polygon editing software POLYGONALmeister Ver. 4 (PMV4, UEL Corp., Tokyo, Japan) [6]. The coordinate system used in this study was based on the DICOM standard: the positive X-axis points toward the phantom's left side, the positive Y-axis points toward the phantom's posterior and the positive Z-axis points from inferior to superior direction. Superimposition and shape error evaluation To determine shape error (shape differences between two models that are signed differences), CAD comparison and inspection software SpGauge 2014.1 (SpG, Aronicos Co., Ltd., Shizuoka, Japan) was used for performing superimposition and measurement. For the superimposition, one of two STL models was moved using the best-fit surface-based registration algorithm of SpG, with the operation repeated until the movement amount with the other STL model approached as close to 0.00 mm as possible. Mean, maximum, and minimum shape errors were recorded, with expansion indicated as positive and contraction indicated as negative. In the color mapping, positive errors are displayed in warm colors and negative errors are displayed in cool colors. Statistical analysis Correlation between the mandible STL model and the ball STL model was determined using Spearman's rank correlation coefficient applied to the difference between lengths in each of the X-, Y-, and Z-axis directions, and also differences in volume. Comparisons between ball STL models were performed by one-way ANOVA followed by Tukey's multiple comparison test. 
After superimposition, the shape error of mandible STL models was evaluated using the Kruskal-Wallis test, and multiple comparisons via the Steel-Dwass test. Statistical analysis was performed using open-source statistical analysis software R Ver 3.6.1 [7], with a statistical significance level set at 5%. 6 The data size of each STL model and the number of triangles in it for each software package were shown in Table 2. For the ball STL model, lengths in the X-, Y-, and Z-axis directions exceeded 10 mm, with length of the Z-axis direction longer than those of the X-, and Y-axis directions, with significant differences between lengths of the ball STL model across software packages (Fig. 2). One software package (MCS) showed larger values for lengths of X-and Y-axis directions compared with the other eight software packages (Fig. 3). A negligible to low correlation was observed between the ball STL model and the mandible STL model for the lengths of the X-, Y-, and Z-axis directions. With regard to volume, a high correlation was found between the ball STL model and the mandible STL model (Table 3). One software package (IN3) showed a larger value than the other eight packages (Fig. 4). Evaluation after superimposition of the STL models found slight variations for each software package, with a mean shape error of 0.11 mm, maximum shape error of + 1.69 mm, minimum shape error of -1.55 mm, median shape error 0.08 mm and 95% confidence interval, 0.08 to 0.135. No significant differences were found for shape error across software packages (Fig. 5). Discussion Difficulty of 3D printing in oral and maxillofacial surgery Because our 3D printing system uses a fused deposition modeling (FDM) desktop 3D printer, which is suitable for fabricating solid 3D models, our fabric target is teeth and jawbones. Our system makes it possible to fabricate "inexpensive" 3D models for oral and maxillofacial surgery. 3D models are particularly useful because curved surfaces and minute areas are difficult to understand via a PC display [5]. However, knowledge about 3D printing is sparse, especially with regard to how to create "necessary and sufficient" data rather than utilization of case reports of 3D models. We therefore needed to learn 3D printing through trial and error. We divided our workflow into three steps, each of which requires a different file format. Step 1 involves acquiring a 3D volume image of the patient as a DICOM images file. Step 2 entails segmenting the anatomical structure from surrounding structures and converting/exporting the segmented virtual 3D model in STL data format. Segmentation of hard and soft tissue is relatively easy. However, in many cases it is difficult to construct an STL model for two reasons. One reason is that thin hard tissue (e.g. bone surrounding the nasal cavity, orbital floor), and narrow tissue gaps (e.g. upper and lower joint cavity between the temporal bone and the mandible) are not clearly reproduced in the STL model. Secondly, many artifacts (e.g. metal artifacts and/or beam-hardening from dental prostheses) reduce the readability of the images and prevent segmentation. Step 3 concerns 3D printing the physical 3D model, which requires use of G-code generation software [8] to produce G-code as 3D printable data. Each step of the entire process-segmentation of DICOM images, processing of STL data, generation of G-code data, and performance of the 3D printer itself-affects the accuracy of the final 3D model. 
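As a rough illustration of Step 2, the sketch below converts a DICOM series into an STL file in Python. It is not the pipeline used by any of the nine packages compared here: it assumes the pydicom, scikit-image, and numpy-stl libraries, a folder containing a single axial series, and it uses marching cubes for surface extraction; the threshold of 350 mirrors the voxel value used in this study, while rescale-to-HU handling, noise removal, and hole correction are omitted.

```python
# Minimal DICOM -> STL sketch (illustrative only): fixed-threshold
# segmentation followed by surface extraction and STL export.
from pathlib import Path
import numpy as np
import pydicom
from skimage import measure
from stl import mesh  # numpy-stl

def dicom_series_to_stl(dicom_dir, stl_path, threshold=350):
    # Read the axial slices and sort them along the Z (body) axis.
    slices = [pydicom.dcmread(p) for p in sorted(Path(dicom_dir).glob("*.dcm"))]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array.astype(np.int16) for s in slices])

    # Voxel spacing (z, y, x) so the mesh is expressed in millimetres.
    dy, dx = (float(v) for v in slices[0].PixelSpacing)
    dz = float(slices[0].SliceThickness)

    # Extract the isosurface at the chosen voxel value (the binarization threshold).
    verts, faces, _, _ = measure.marching_cubes(
        volume, level=threshold, spacing=(dz, dy, dx))

    # Pack the triangles into an STL mesh; numpy-stl writes binary by default.
    out = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
    out.vectors[:] = verts[faces]
    out.save(stl_path)

dicom_series_to_stl("ct_series/", "mandible.stl")  # placeholder paths
```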
Constructing STL data is the most important operation in fabricating the 3D model. Characteristics of DICOM to STL conversion software Additive manufacturing using desktop 3D printers requires 3D CAD data to represent the 3D shape. The STL file format system manages this by using a collection of small triangles or polygons. A curved surface is formed by thinning the triangles that compose the 3D model. Software forms used in this study include software equipped as a function of a medical image viewer, software developed in the field of CAD engineering, and software developed as a function of 3D printing software used in the industrial field; they therefore come from a range of different backgrounds. Appearances of the constructed STL models differed across software packages. Most notably, the cortical bone of the top and/or lateral pole of the mandibular condyle was thin, so the reproducibility of this part was different across all software packages (Fig. 6). When "faithfully" fabricating according to this STL model, the steps would appear as holes (defects). Moreover, in some software packages, the surface of each STL model was rough. Although the ball STL model was constructed by MDCT scanning of a 10-mm-diameter bearing ball, all software packages rendered it expanded in all directions. The average ball length in all directions was 10.52 mm, but the length in the Z-axis direction was slightly longer than in the X-and Y-axis directions. This is likely because of differences in voxel size of DICOM images (X-, Y-, Z-axis direction lengths were 0.468, 0.468, and 0.500 mm, respectively), and may also have been affected by the partial volume effect that occurred on the border between the ball surface and the air. The diameter of the ball in the STL model was calculated from the mean value of the volume (605.23 ± 42.38 mm 3 ) as 10.49 mm. The shape error for this entity was equivalent to the size of one voxel, and was reproduced by each software package. It is difficult to quantitatively assess the STL conversion performance of each software package independently. To solve this problem, this study superimposed pairs of STL models (constructed with different software packages) on each other; the difference between each pair was visualized and measured as a shape error. Although differences between shapes of the constructed STL models were visible on the shape error image, no significant statistical differences were found across all mandible STL models. Figure 7 shows images captured by superimposition and visualization of S3D and MIT, which had the minimum shape error. Figure 8 shows images of MCS and VE3 having a maximum shape error. The reason the shape errors could be seen by the software packages, though only slightly, was that the binarization algorithms differ across software packages. There are various binarization methods [9]. The shape differences appeared because of differences in image processing near threshold values, such as the thin cortical bone or strongly curved surface. The color map of Fig. 8 is colored as a green to yellow area, with mean distances of around 0.30 mm. This is smaller than one voxel size. Regarding the roughness of the surface of the STL model, it was thought that the influence of the unevenness was small in measuring. Therefore, it was considered that the shape error was not affected. Therefore, it can be assumed that this kind of error is acceptable in fabricating 3D models for clinical use in oral and maxillofacial surgery [10][11][12]. 
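For a rough, package-independent check of the differences discussed above, the sketch below estimates a shape error between two STL models. It is only a simplified stand-in for the signed, best-fit comparison performed with SpG: it assumes the two models are already aligned in the same coordinate system, uses unsigned nearest-vertex distances rather than signed point-to-surface distances, and relies on the numpy-stl and SciPy libraries; the file names are placeholders.

```python
# Rough shape-error estimate between two aligned STL models (illustrative):
# unsigned nearest-vertex distances, not a signed best-fit inspection.
from scipy.spatial import cKDTree
from stl import mesh  # numpy-stl

def shape_error(stl_a, stl_b):
    # Use the triangle vertices of each model as point samples (in mm).
    a = mesh.Mesh.from_file(stl_a).vectors.reshape(-1, 3)
    b = mesh.Mesh.from_file(stl_b).vectors.reshape(-1, 3)

    # For every vertex of model A, distance to the closest vertex of model B.
    d, _ = cKDTree(b).query(a)
    return d.mean(), d.max()

mean_err, max_err = shape_error("model_S3D.stl", "model_MIT.stl")  # placeholders
print(f"mean {mean_err:.2f} mm, max {max_err:.2f} mm")
```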
STL data represent a 3D shape as a collection of small triangles. The number of triangles depends on the size, shape and internal structure of the object. More complex features and higher resolution lead to an increase in the number of triangles in the converted/exported STL data. Processing a large number of triangles draws heavily on the processing power of a PC; the calculation is time-consuming and can affect subsequent operations. Reduction in the number of triangles directly leads to a reduction in data size. However, a reduction in the number of triangles may also cause a change in shape [13]. Therefore, the mandible STL model was superimposed before and after the reduction in the number of triangles to evaluate the dimensional change, and the shape error was observed. To reduce the number of triangles to 200,000, i.e. the number of triangles recommended in the report [14], the "simplify data by specifying the number of triangles" function of PMV4 was used [15]. was somewhat rough when displayed on the monitor, the resultant shape error of that STL model relative to the models with the largest and the mean numbers of triangles was almost 0 mm. Considering that the minimum laminating pitch of the FDM desktop 3D printer we use is 0.05 mm, this supports the inference that the recommended number of triangles was both necessary and sufficient for 3D printing. Limitations and prospects The shape errors are inevitable because of the spatial resolution limits of MDCT. However, when using 3D models in fields that require more detailed operations, such as microscopic surgery, other modality options should be considered, such as the use of limited cone-beam CT, which expected that produces a better high-definition STL model. In this study, a MDCT scanner was used to convert DICOM images to STL data under the condition of fixed voxel value binarization threshold. In addition to differences between patients, physics-based factors such as irradiation dose and other differences in MDCT models and scanning parameters may also affect the difficulty of constructing STL models [16,17]. Setting a threshold for 3D printing requires medical knowledge, especially tomographic image anatomy, as well as knowledge of modalities of imaging principles. For example, in surgical simulation, it is necessary to reproduce not only the 2D shape but also the 3D shape of the lesion. Detection of the lesion depends on the skills of a radiologist who has comprehensive knowledge of anatomy, disease, and surgical techniques. At present, it is common practice to fabricate 3D models cooperate with to the purpose of use in consultation with a technician. To become more familiar, it will be important to train radiologists in combinations of medicine, 3D printing technology, digital engineering, and image engineering. To that end, it may be advisable to introduce 3D printing technology at the undergraduate level of medical/dental education. Conclusions We evaluated nine commercial/open-source software packages that convert DICOM images into STL data. Our evaluation included superimposing STL models constructed by different software packages over each other, to visualize and measure shape error. However, the slight differences we found were negligible and can be considered acceptable for clinical use of 3D models in oral and maxillofacial surgery. 
In designing data for 3D printing of fine and/or thin structures, such as the mandibular condyle shown in this study, it is important to pay close attention to setting the threshold for the ROI and to binarizing the DICOM images during conversion. In conclusion, when using STL data conversion software, it is important to understand the features and characteristics of the software package and to align its use carefully with the intended purpose. Ethics approval and consent to participate The study protocol was reviewed and approved by the institutional review boards of the participating institutions. Consent for publication The authors grant the publisher the sole and exclusive license of the full copyright in the contribution, which license the publisher hereby accepts. Consequently, the publisher shall have the exclusive right throughout the world to publish the contribution in all languages, in whole or in part, including, without limitation, any abridgment and substantial part thereof, in book form and in any other form including, without limitation, digital, electronic and visual reproduction, electronic storage and retrieval systems, including internet and intranet delivery, and all other forms of electronic publication now known or hereinafter invented. The authors guarantee that the contribution to the work has not been previously published elsewhere, or that if it has been published in whole or in part, any permission necessary to publish it in the work has been obtained.
Figure legends
Shape error (signed differences) measurement after superimposing pairs of STL models, using SpG. The black square indicates the mean value, the upper limit indicates the maximum value, and the lower limit indicates the minimum value.
Figure 7 Comparison of STL models between S3D (a) and MIT (b), where the shape error between the two STL models was the minimum value. Visualization of the shape error (signed differences) after superimposition is shown on the right (c). Almost all of the STL model was green. The mean error between the two STL models was 0.00 mm (maximum +0.16 mm).
Figure 8 Comparison of the STL models of MCS (d) and VE3 (e), which evidenced the largest shape error between any two STL models. Visualization of the shape error (signed differences) after superimposition is shown on the right (f). The whole mandible is depicted as green to yellow (shape error range of about 0.0 mm to 0.5 mm), with occasional orange to red parts. The mean shape error was 0.27 mm (maximum +0.80 mm, minimum -0.81 mm).
Figure 9 Visualization of the STL model constructed with IN3, which had the largest volume and number of triangles, the STL model with the reduced number of triangles, and the shape error (signed distances) after superimposition. When the original number of 1,247,962 triangles (a) was reduced to 200,000 (b), the surface of the STL model appeared slightly rough. In the color map, the entire area was green (c). The mean shape error was 0.02 mm.
5,240.2
2020-03-10T00:00:00.000
[ "Computer Science", "Engineering" ]
Pellet Softening Process for the Removal of the Groundwater Hardness ; Modelling and Experimentation A lab scale pellet reactor (PR) was designed and fabricated to carry out extensive investigations on the removal efficiency of the hardness of groundwater.  The groundwater of 2200 – 2600 mg/L hardness was collected from Abdulla Ibnalhassan wells area located at the west desert of Al-Shinafiyah district (70 km to the southwest of Al-Dewaniyah city, Iraq). Both hydrodynamic parameters of the pellet reactor (porosity and fluidized bed height) and the parameters of calcium carbonate crystallization process (calcium carbonate equilibrium, pellet size, and density) were modeled and compared with the experimental results of the lab scale pellet reactor. The comparison showed that fair agreement between modeled and measured results was observed. The removal efficiency of both calcium and magnesium ions were 62.5-99% and 83-99% respectively. The removal efficiency was found to be strongly dependent on pH and the ratio of NaOH solution flow rate to the groundwater flow rate in the pellet reactor. Introduction Pellet Reactor (PR) is increasingly used to remove the hardness of groundwater. PR is favored over other softening processes of groundwater due to its low produced sludge, low capital cost, and low maintenance. The main reactions of groundwater softening are shown below [ PR is simply a cylindrical column filled to a certain depth with a seeding material (sand particles) to crystalize Ca+2 ions and form calcium carbonate on its surface [2]. The literature presents many research articles that cover most important aspects of groundwater softening process and PR role [3]. The reactor is firstly filled with granular materials such as sand and quartz. The hard water and softening agent are pumped to the reactor at suitable flow rates to fluidize sand granules and stimulate crystallization of CaCO3 [4]. Crystallization of CaCO3 on the surfaces of sand particles, in a pellet reactor, can be divided into two stages: nucleation and growth up [5]. Nucleation may be described as CaCO3 crystals formation. Growth up is the process in which CaCO3 crystals crystallize at the surface of sand or quartz granules in the form of layers [6]. Mahvi et al. [7] successfully modeled the hydrodynamics and growth rate of crystals of a softening lab scale fluidized bed reactor. He used two-step crystal growth model rather than the over-all model. However, utilizing the two-step growth model suffers the lack of the surface-reaction order, and a lean fluidized bed reactor was proposed to include the surface-reaction order. The crystal growth process of low soluble salts seems more complicated than that of soluble salts. Then, a comparison between crystal growth kinetics of low soluble salts and the lean fluidized -bed crystallizers was demonstrated. As a consequence, the two-step growth model was found to be convenient for the assessment of crystal growth rates in the design of a liquid fluidized bed reactor [8]. Schagen, et al. [9] successfully simulated the pellet reactor and showed the effect of operational parameters (particle size and height of fluidized bed) on the removal efficiencies of the total hardness, calcium ions, and magnesium ions. They observed that maintaining the particle size, the reflux ratio, and the flow rates at optimum values can improve the pellet reactor performance. Schagen, et al. 
[10] explained, through a mathematical model, the relationship between the size of the discharged pellets and the saturation rate of CaCO3 in the treated water. They found that using a smaller particle size of seeding material led to a significant decrease in the saturation rate of CaCO3 in the treated water and improved the pellet reactor performance. Schagen, et al. [11] observed that the growth process of crystals of weakly soluble salts is more intricate than that of highly soluble salts. As a result, the two-step growth model was found to be suitable for simulating the rate of crystal growth in a fluidized-bed reactor. Hu, et al. [12] developed a new mathematical model to study the effect of superficial velocity, particle size, and supersaturation on the rate of pellet particle growth and the growth rate of the fixed bed height. The study showed a linear relationship between the rate of particle growth and both superficial velocity and supersaturation. The present research aims to study the removal efficiency of total hardness and to model the pellet reactor. The model is composed of two parts: the first part covers fluidization and fluidized-bed hydrodynamics, while the second part is related to the crystallization of CaCO3 on the surfaces of the seeding sand particles. The first part models the relation between porosity, pellet diameter, and water velocity. The second part models the crystallization of CaCO3 as a shift in the equilibrium between the soluble and solid states of calcium. Seeding Materials Sand with a particle size of 0.426-2 mm was used as a seeding material to crystallize CaCO3 on its surfaces. The density of the sand was 1.431 g/cm3. The sand weight was measured before and after each experiment to determine the mass of CaCO3 crystallized on the surfaces of the sand particles.
Nomenclature
dg: average diameter of sand grain (m)
dg1: average diameter of sand grain, type 1 (m)
dg2: average diameter of sand grain, type 2 (m)
dp: average diameter of calcite pellet (m)
dp1: average diameter of calcite pellet, type 1 (m)
dp2: average diameter of calcite pellet, type 2 (m)
ρp1: density of the first type of calcite pellets (kg/m3)
ρp2: density of the second type of calcite pellets (kg/m3)
ρw: density of groundwater (kg/m3)
NaOH Granules NaOH granules from Sigma-Aldrich were used to prepare a 0.625 M (2.5% mass concentration) NaOH solution. The characteristics of the NaOH granules are shown in Table (2). Description of the Lab scale Pellet Reactor (PR) The lab scale PR shown in Figures (2, 3) was used for removing total hardness, calcium ions, and magnesium ions from groundwater. It is simply a fluidized bed column made from Plexiglas with an inner diameter of 6 cm and a height of 170 cm. The groundwater was fed (injected) through three nozzles mounted at the bottom plate of the column, while the NaOH solution was fed (injected) through one nozzle located at the center of the bottom plate. Nozzle details are schematically shown in Figure (4). The PR was equipped with two rotameters to measure the groundwater and NaOH solution flow rates. The water rotameter measuring range is 25-250 L/hr and the NaOH solution rotameter range is 6-30 L/hr. A sand filter with a 12 cm outer diameter and 27 cm height was installed at the outlet of the PR to capture suspended materials. Two pressure measuring devices (0.0-6.0 bar) were installed on the groundwater and NaOH solution supply lines to measure the pressure head of both supply lines. Three plastic tanks were used for storing the groundwater, NaOH solution, and treated water. 
1.575 kg of seeding material was used to fill the FBR to a height of 42 cm. The seeding material (sand) was obtained from local sources. Porosity The Ergun model is widely used to calculate the porosity of the fluidized bed. Porosity is calculated by balancing the pressure gradient across the fluidized bed, the mass of the pellets, and the drag force exerted by the water on the calcite pellets [13]. The pressure gradient is expressed in equation (1), and the pressure gradient caused by the drag force of the water on the calcite pellets is represented by equation (2). CW1 is the drag coefficient, which is expressed in terms of the hydraulic Reynolds number (Re_h); Re_h is the particle Reynolds number, expressed using the following empirical equation [14]. For Re_h in the range 5-100, CW1 is expressed in equation (5). Finally, the general form of the porosity equation is given in equation (6); this equation has generally been used in most studies to find the porosity of the pellet softening column [14][15][16]. The height of the fluidized bed The height of the fluidized zone of sand particles in the pellet reactor is calculated as shown in equation (7) [17]. Pellet Size and Density The pellet diameter depends on the accumulated mass of calcite (CaCO3) on the surface of the sand particles. The pellet size is determined using the following equation [17]. The calcite pellet density is determined by the mass of calcite (CaCO3) on the sand surface and the mass of the sand particle, as follows [17]. Two types of sand were used, so the pellet size and density are calculated separately for each type. The average pellet size and density are then calculated as follows: ρp = 0.29 × ρp1 (at 2 mm) + 0.71 × ρp2 (at 0.5 mm) (14); dp = 0.29 × dp1 (at 2 mm) + 0.71 × dp2 (at 0.5 mm) (15). The equilibrium of calcium carbonate (CaCO3) Calcium carbonate crystallization occurs when the solid and dissolved calcium carbonate (CaCO3) are in equilibrium [18]. The activity coefficient (f) depends on the ionic strength (IS) of the water and is represented by the following equation. The ionic strength of a solution is determined by the concentrations of the ions it contains; when compounds dissolve in water, they dissociate into ions, and the ionic strength is expressed using equation (18). The supersaturation of calcium carbonate (CaCO3) in water is based on two parameters: the saturation index (SI) and the pH, which can be expressed as the pH offset (pHs) at which the actual calcium ion concentration (Ca+2) is in equilibrium with the carbonate (CO3-2). The crystallization driving force is determined by the saturation index (SI). The two indicators are closely related and are used to assess the performance of the crystallization process. The saturation index and the pH offset are determined using equations (19) and (20), respectively: pHs = pH - SI (20). The change in calcium concentration in equation (21) is determined by the supersaturation of calcium carbonate, the specific surface of the calcite pellets (S), and the crystallization kinetics (K) [20]. Here, Ks is the solubility product, which is a function of the calcium and carbonate concentrations and is calculated by equation (22). The modeled calcium concentration then follows in equation (23). Equation (24) is used to calculate the specific surface of the calcite pellets (S) as a function of the bed porosity (p) and the pellet diameter (dp): 
= 6(1 − ) (24) The supersaturated water with calcite that transport to seeding material surface relies on the temperature and water flow, the crystallization constant is calculated from equation (25) (32) Modeling work There are two main models that are commonly used to model pellet softening reactors. The first model is Ergun model (1952) [13 ] which is the exerted forces on the pellets in the reactor, and the second model is Zaki model (1954) [14 ]which is based on an expansion formula, because Zaki model is preferred for full scale PR experiment, Ergun model was adopted in this chapter for lab scale PR experiments. Figures 8,9,and 10 show the effect of water flow rate on the expansion of the bed (bed height) and consequently on the pellet diameter. As the bed height increases, the pellet diameter decreases due to higher water upward velocity which causes a higher rate of erosion of the pellet surface. However, for water flow rate of 50 L/hr., the pellet diameter decreased from 0.937 -0.936 when the bed height increased from 0.81 -0.85, which is lower than the bed height for flow rates 25, and 37.5 L/hr. This may be due to a probable homogeneity of bed expansion at higher water velocities. Figures 14,15,and 16 show fair agreement between modeled and measured concentrations of Ca +2 ions. The 50 L/hr. water flow rate showed better agreement between modeled and measured concentrations. The higher water flow rates demonstrated good mixing of large and fine particles within the expanded (fluidized) bed and better distribution of Ca +2 ions. The 25 L/hr. water flow rate reflects the case where large particles were settled at the bottom of the bed and minor expansion of bed was noticed. This can be attributed to a considerable inhomogeneity of Ca +2 ions in the expanded bed. Figure 17 demonstrates that a good agreement between measured and modeled values of calcium ions concentration was noticed for the nine experiments in this research. Comparison between modeled and measured fluidized bed height (m) A fair agreement between modeled and measured fluidized bed height was observed as shown in Figure 18. However, higher water flow rate results in better agreement between modeled and measured fluidized bed height. The correlation coefficient of the fitting in Figure 18 is 0.94, which indicates well correlated data this agreed with Hu, et al. [26] which they compared between modeled and measured fluidized bed height, the correlation coefficient of their study was 0.95. Figure 19 shows fair agreement between modeled and measured values of fluidized bed height for the nine experiments in this research work. Comparison between modeled and measured calcium carbonates mass (kg) deposited on the surfaces of the pellets A fair agreement between modeled and measured mass of calcium carbonates deposited on the surfaces of the pellets was observed as shown in Figures 20, 21, and 22. However, higher water flow rate leads to better agreement between modeled and measured mass of calcium carbonates. This also can be attributed to the better homogeneity of fluidized particles of sand in the expanded bed of the pellet reactor. Figure 23 shows fair agreement between modeled and measured values of calcium carbonate mass for the nine experiments in this research work. Conclusions Very high removal efficiencies of total hardness were obtained when flow rates of the groundwater and NaOH solution are properly determined. 
The obtained results revealed that removal efficiencies of 78.7-99.1% can be achieved when the ratio of NaOH solution flow rate to groundwater flow rate ranges from 0.24 to 0.72, with an initial pH of 7-10.3. Modeling of the groundwater softening process in the pellet reactor showed fair agreement between measured and modeled values of removal efficiency, crystallized mass of calcium carbonate, bed porosity, pellet diameter, and density distribution in the expanded (fluidized) bed. Successful modeling of the pellet reactor reduces the cost of experimentation and may lead to a better understanding of groundwater softening processes.
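The explicit relations given earlier in this paper lend themselves to a short numerical sketch, appended here for illustration. The mixture weights 0.29 and 0.71 and the form pHs = pH - SI are taken from Equations (14), (15), and (20); the specific-surface expression is a reconstruction of the garbled Equation (24) using the standard relation for a bed of spheres, S = 6(1 - p)/dp, consistent with the stated dependence on porosity and pellet diameter; all numerical inputs are placeholders, not measured values.

```python
# Minimal sketch of Equations (14), (15), (20) and a reconstructed Equation (24).
# Numerical inputs are illustrative placeholders, not data from the experiments.

def average_pellet_properties(d_p1, rho_p1, d_p2, rho_p2, w1=0.29, w2=0.71):
    """Mass-fraction-weighted pellet diameter and density for the 2 mm and
    0.5 mm sand fractions, Equations (14)-(15)."""
    return w1 * d_p1 + w2 * d_p2, w1 * rho_p1 + w2 * rho_p2

def ph_offset(ph, saturation_index):
    """pHs = pH - SI, Equation (20)."""
    return ph - saturation_index

def specific_surface(porosity, d_p):
    """Reconstructed Equation (24): S = 6(1 - p) / dp for spherical pellets."""
    return 6.0 * (1.0 - porosity) / d_p

d_p, rho_p = average_pellet_properties(2.0e-3, 2400.0, 0.5e-3, 2600.0)
print(f"dp = {d_p:.2e} m, rho_p = {rho_p:.0f} kg/m^3")
print(f"pHs = {ph_offset(8.2, 0.6):.2f}")
print(f"S = {specific_surface(0.55, d_p):.0f} m^-1")
```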
3,297
2019-09-30T00:00:00.000
[ "Engineering" ]
A novel approach to Bayesian consistency : It is well-known that the Kullback–Leibler support condition implies posterior consistency in the weak topology, but is not sufficient for consistency in the total variation distance. There is a counter–example. Since then many authors have proposed sufficient conditions for strong consistency; and the aim of the present paper is to introduce new conditions with specific application to nonparametric mixture models with heavy– tailed components, such as the Student- t . The key is a more focused result on sets of densities where if strong consistency fails then it fails on such densities. This allows us to move away from the traditional types of sieves currently employed. Introduction In this paper we consider a novel approach to Bayesian consistency in nonparametric problems, specifically concentrating on mixture models, which are the usual type of nonparametric model used in practice. The first formulation is given by Doob [9]; but this approach has a drawback in infinite dimensional models, see [7,8]. Instead it is commonly assumed that observations are i.i.d. from some fixed but unknown density function, and a general sufficient condition for weak consistency is given in Schwartz [25]. To set the scene, assume that the observations X 1 , . . . , X n are i.i.d. realvalued random variables from a true density p 0 . Let the model L be the space of all Lebesgue densities on (R, R) equipped with the total variation metric, and Π be a prior on (L , L), where R and L are Borel σ-algebras. Formally, for a (pseudo-)metric d on L , the posterior distribution Π(·|X 1 , . . . , X n ) is called to be d-consistent at p 0 if Π(d(p 0 , p) > η|X 1 , . . . , X n ) converges to zero in probability for every η > 0. When d is the total variation (Lévy-Prokhorov, resp.) metric, it is often called strongly (weakly, resp.) consistent. Let K(p, q) = p log(p/q) dμ be the Kullback-Leibler (KL) divergence, where μ is the Lebesgue measure. Schwartz [25] has shown that if p 0 lies in the KL Π p ∈ L : K(p 0 , p) < δ > 0 for every δ > 0, (1.1) then the posterior distribution is weakly consistent at p 0 . Along with the KL support condition (1.1), various sufficient conditions for strong consistency have been studied in infinite-dimensional models, see [3,29,28,6] for general conditions. Some important references concerning specific models and priors are [1,11,13,5]. Further work incorporating convergence rates can be found in [16,12,31,21], for example. Since the total variation is a stronger metric than Lévy-Prokhorov, see [20, p. 34], the KL support condition (1.1) is often insufficient for strong consistency. In this regard, Barron, Schervish and Wasserman [3] constructed a prior satisfying the KL support condition (1.1) but the corresponding posterior distribution is not strongly consistent. Walker, Lijoi and Prünster [30] explained this phenomenon with the notion of data tracking. In this paper we present a new sufficient condition for strong consistency and apply it to nonparametric mixture models. Since the convergence in Lévy-Prokhorov metric is equivalent to weak convergence, once the prior satisfies the KL support condition (1.1), Schwartz's theorem implies that there exists some sequence n ↓ 0 such that Π(d P (p 0 , p) > n |X 1 , . . . , X n ) converges to zero in probability. For strong consistency, therefore, it suffices to show that Π(A n,η |X 1 , . . . , X n ) → 0 in probability for every η > 0, where A n,η = p ∈ L : d P (p 0 , p) ≤ n , d V (p 0 , p) > η . (1. 
2) The new approach is based on the fact that A n,η is a collection of "weird" densities in the sense that it consists of highly fluctuating densities with a centering around p 0 . With a reasonable prior, therefore, prior mass imposed on A n,η is negligible, which in turn implies strong consistency. The focus on A n,η allows us to move away from the typical uses of sieves. Our approach is very different from [30], relying on a special property of densities in A n,η . The new approach entails different kinds of sieves avoiding the calculation of Hellinger entropy or prior probabilities of small Hellinger balls. Instead, we require a Lévy-Prokhorov convergence rate ( n ) for which we provide a general sufficient condition. Our new approach significantly simplifies conditions required on the hyperparameter of a Dirichlet process in a mixture model, for example. In particular, a mean parameter can have an arbitrarily heavy tail. We also consider a mixture of Student's t distributions which can be used to model heavy-tailed distributions; the consistency of which is yet to been done in the literature. Notation For p ∈ L , the corresponding probability measure is denoted as P , and vice versa. The expectation of a function f with respect to P is denoted P f, i.e. P f = f (x)dP (x). The expectation under the true distribution is denoted E. Let be the total variation and Hellinger metrics. The indicator function for a set A is denoted 1 A . For two positive sequence (a n ) and (b n ), a n b n represents a n /b n → 0. The maximum of two numbers a and b are denoted a ∨ b. The inequality represents "less than up to a constant multiplication," where the constant is universal (such as 2, π, e) unless specified explicitly. Main results For p ∈ L and γ > 0, define a non-negative function p γ on R as where B γ (x) = {y ∈ R : |y − x| < γ}. Note that p γ = p * U γ , where * denotes the convolution and U γ is the uniform distribution on the interval [−γ, γ]. Therefore, p γ is also a probability density which can be understood as a smoothed version of p, where γ controls the degree of smoothness. For example, suppose p(y) = 2 0 < y < 1/4 or 1/2 < y < 3/4 0 otherwise. Then See Figure 1. For simplicity, (p 0 ) γ is written as p 0,γ . For two probability measures P and Q, let be the Lévy-Prokhorov metric, where A = ∪ x∈A B (x). Note that the convergence in d P is equivalent to weak convergence, and one inequality in the definition of d P can be omitted; see [17]. For a given density p 0 , suppose that a density p is close to p 0 in d P but far away from p 0 in d V . This is only possible when p is a "weird" density in the sense that it highly fluctuates with a centering around p 0 ; as illustrated in Figure 2. It is an important property of such a density that d V (p, p γ ) is large even for small γ. Note that for every fixed p ∈ L , d V (p, p γ ) converges to zero as γ goes to 0 by Lebesgue differentiation theorem and Scheffé's lemma, but never converges uniformly over L due to highly fluctuating densities. Therefore, if the prior probability for large d V (p, p γ ) is sufficiently small, the posterior distribution would be strongly consistent. The key point here is that after excluding weird densities from L , d V (p, p γ ) can be shown to converge uniformly. 
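For readability, the two displays that were garbled in extraction, Schwartz's KL support condition (1.1) and the set A_{n,eta} of (1.2), can be restated together with the smoothing operation defined above. This is a reconstruction from the surrounding prose; epsilon_n denotes the Levy-Prokhorov rate sequence that appears simply as "n" in the extracted text.

```latex
% Restatement of (1.1), (1.2) and the smoothing p_gamma, reconstructed from
% the surrounding text; \epsilon_n is the rate sequence garbled as "n" above.
\begin{gather}
\Pi\bigl(\{\,p \in \mathcal{L} : K(p_0, p) < \delta\,\}\bigr) > 0
  \quad\text{for every } \delta > 0, \tag{1.1}\\
A_{n,\eta} = \bigl\{\,p \in \mathcal{L} : d_P(p_0, p) \le \epsilon_n,\;
  d_V(p_0, p) > \eta \,\bigr\}, \tag{1.2}\\
p_\gamma(x) = \frac{1}{2\gamma}\int_{B_\gamma(x)} p(y)\,dy
  = (p * U_\gamma)(x), \qquad B_\gamma(x) = \{\,y : |y - x| < \gamma\,\}.
\end{gather}
```

The fluctuation argument can also be checked numerically. The following Python sketch (an illustration, not part of the paper) computes p_gamma for the piecewise-constant example density above and shows d_V(p, p_gamma) shrinking as gamma decreases; the grid resolution and gamma values are arbitrary choices, and d_V is taken as the L1 distance, the convention bounded by 2 used later in the paper.

```python
import numpy as np

# Numerical illustration of p_gamma = p * U_gamma and d_V(p, p_gamma) for the
# example density p(y) = 2 on (0, 1/4) and (1/2, 3/4), and 0 otherwise.
grid = np.linspace(-0.5, 1.5, 40001)
dx = grid[1] - grid[0]
p = 2.0 * (((grid > 0.0) & (grid < 0.25)) | ((grid > 0.5) & (grid < 0.75)))
cdf = np.cumsum(p) * dx                      # numerical distribution function

def p_gamma(gamma):
    # p_gamma(x) = (F(x + gamma) - F(x - gamma)) / (2 * gamma)
    return (np.interp(grid + gamma, grid, cdf)
            - np.interp(grid - gamma, grid, cdf)) / (2.0 * gamma)

for gamma in (0.1, 0.01, 0.001):
    d_V = np.sum(np.abs(p - p_gamma(gamma))) * dx
    print(f"gamma = {gamma:5.3f}   d_V(p, p_gamma) ~ {d_V:.3f}")
# d_V -> 0 as gamma -> 0 for this fixed p, but the convergence is not uniform
# over all densities, which is what the highly fluctuating densities exploit.
```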
To be more specific, note that by Scwartz's theorem [25], the KL support condition (1.1) guarantees the existence of a sequence n ↓ 0 such that If d P (p, p 0 ) ≤ n , then for any sequence (γ n ), with γ n → 0 and n /γ n → 0, we have as n → ∞, where the o(1) term depends on n , γ n and p 0 only, see the proof of which is not essential but simplifies the proof. Note that condition (ii) holds if the tail of p 0 is not heavier than that of the Cauchy distribution. Since d V (p, p γn ) = o(1) for every fixed p ∈ L , (L n ) can typically be chosen to increase to L , constituting new sieves. In the existing Bayesian literature, such sieves are required to have bounded entropy [11,3] or satisfy a certain prior summability condition [29]. Instead of these conditions, our requirement is (2.3), which eventually gives A n,η ∩ L n = ∅, where A n,η is defined as (1.2). Note that A n,η decreases to the singleton {p 0 }, while L n grows to the whole set L . As illustrated in the next section, we can easily find (γ n ) and (L n ) satisfying (2.3) in nonparametric mixture models. Note that in Barron's counter-example [3], the prior puts large mass on a set of weird densities such as the one in Figure 2. As a consequence, we cannot choose a sequence of sets (L n ) satisfying (2.3), resulting in posterior inconsistency. It should be emphasized that to prove (2.3), we need to know a Lévy-Prokhorov rate ( n ), which can be interpreted as the "price" for avoiding the construction of complicated sieves. Note that the KL support condition guarantees the existence of "some" rate sequence ( n ). If we do not know what n is, we only know that there exists a sequence γ n such that γ n → 0 and n /γ n → 0. If γ n converges too slowly, however, L n satisfying the second assumption of (2.3) cannot contain sufficiently many densities. As a consequence the posterior probability of L c n might not be sufficiently small. M. Chae and S. G. Walker For a given sequence (δ n ), define a specialized KL ball around p 0 as n is a standard assumption to achieve the posterior convergence rate of at least (δ n ); see for example [12,31]. Let B n = {p ∈ L : d P (p, p 0 ) ≤ n }. Since d P induces the weak topology, there exist a number r > 0 and finite number of bounded continuous functions g 1 , . . . , g k such that Note that the number k of sub-bases and radius r may depend on p 0 and n . The key idea for obtaining the Lévy-Prokhorov rate is to find these numbers. In this context, it is shown in Lemma 5.2 that for every n ↓ 0 with n 4 n → ∞, there exists a sequence of tests (ϕ n ) such that where K > 0 is a universal constant. As a consequence, Π(K n ) ≥ e −nδ 2 n implies (2.1) for every n δ n ∨ n −1/4 . Although n −1/4 might be far away from the optimal rate, it is sufficient for strong consistency in many examples. n for a constant c 1 > 0 and a sequence δ n ↓ 0. Then, (2.1) holds for every sequence see Lemma 8.1 of [12]. Combining this with the previous two theorems, we have the following corollary. Mixture of normal distributions Consider a location mixture of normal distributions where φ σ (x) = φ(x/σ)/σ, φ is the standard normal density and F is a probability measure. A prior Π on L can be constructed by putting independent priors for σ and F . With a slight abuse of notation, we use the notation Π for denoting both a prior for (σ, F ) and a prior for p. For p = p F,σ , it can be shown that see Lemma 5.3. Note that the right hand side of (3.1) depends on p through σ only. 
Therefore, sieves (L n ) can be constructed independent of F . For a concrete example, we consider an inverse gamma Γ −1 (a 1 , a 2 ) prior for σ 2 , which is standard in both theory and practice, where a 1 , a 2 > 0 are hyperparameters and Γ −1 (a 1 , a 2 ) denotes the inverse gamma distribution whose density is proportional to x → x −a1−1 e −a2/x . Note that the prior on σ 2 puts little mass around zero implying that prior mass for large d V (p, p γ ) with small γ is nearly zero, c.f. (3.1). In most examples Π(K n ) ≥ e −nδ 2 n with δ n much smaller than n −1/4 , so the condition given in Theorem 3.2 is very mild. A natural choice for the prior on F is DP(a 3 , G), where DP(a 3 , G) denotes the Dirichlet process with precision a 3 > 0 and mean G. For the Dirichlet process mixture of normal prior, the prior concentration condition has been extensively studied in literature, see for example [15,26,22,4]. In most existing papers, the true density p 0 is firstly approximated by a finite mixture p with a sufficiently small number N , and then prove that a DP mixture prior puts sufficiently large mass around p * . It should be noted that in the above mentioned papers, the tail of G must be exponentially thin to construct suitable sieves. Lijoi, Prünster and Walker [23] partly resolved this problem using the martingale approach of [29], but it is still required that G has a finite mean. With our approach, however, the only requirement is the prior concentration on K n which holds if the tail of G is not extremely thin, see Proposition 2 in [4] for the most recent result. Therefore, conditions on G can be significantly weakened. For example, the Cauchy and heavier-tailed distributions can be taken for G which are not allowed with any other methods. Mixture of Student's t distributions If the true density p 0 is heavy-tailed, e.g. the tail is of a polynomial order, then it is theoretically unknown that Bayesian procedures based on normal mixtures work well. Practically, there are two possible methods to utilize a Dirichlet process mixture of normal for fitting data generated from a heavytailed distribution. The first one is to use a location-scale mixture. In this regard, Tokdar [27] proved the posterior consistency with a location-scale mixture under mild conditions. His result allows a heavy-tailed distribution such as Cauchy for the true density. Secondly, one may use a heavy-tailed mean parameter G. Unfortunately for both methods, it is challenging to generalize the theoretical results beyond consistency. In particular, existing mathematical tools for getting convergence rates might be difficult to apply because it is rarely possible to find (δ n ) satisfying Π(K n ) ≥ e −nδ 2 n with a heavy-tailed p 0 . We are not aware of whether this is due to the mathematical difficulty or the intrinsic limit of normal mixtures. As an another alternative, we consider a mixture of Student's t distributions. While a mixture of Student's t distributions has been considered in some application, see for example [24,10,18], its asymptotic behavior has not been studied in the literature. In Bayesian analysis, this is due to the technical challenge for constructing suitable sieves with heavy-tailed components. Since the approach given in the present paper avoids the construction of complicated sieves, it can also be applied to Student's t mixtures. Let h be the density of Student's t distribution with v > 0 degrees of freedom, and h σ (x) = σ −1 h(x/σ). 
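For concreteness, here is a minimal sketch (not from the paper) of a single draw from the Dirichlet process mixture-of-normals prior discussed above, with sigma^2 ~ Gamma^{-1}(a1, a2), F ~ DP(a3, G), and a standard Cauchy base measure G, which the stated conditions permit. The stick-breaking truncation level and the hyperparameter values are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
a1, a2, a3, truncation = 2.0, 1.0, 1.0, 200   # illustrative hyperparameters

sigma2 = 1.0 / rng.gamma(shape=a1, scale=1.0 / a2)        # inverse gamma draw
v_sticks = rng.beta(1.0, a3, size=truncation)             # stick-breaking weights
w = v_sticks * np.concatenate(([1.0], np.cumprod(1.0 - v_sticks)[:-1]))
theta = rng.standard_cauchy(size=truncation)              # atoms from G (Cauchy)

def density(x):
    """Evaluate the sampled mixture density p_{F,sigma}(x) = sum_k w_k phi_sigma(x - theta_k)."""
    sigma = np.sqrt(sigma2)
    kern = np.exp(-0.5 * ((x[:, None] - theta[None, :]) / sigma) ** 2)
    return (w[None, :] * kern / (sigma * np.sqrt(2.0 * np.pi))).sum(axis=1)

x = np.linspace(-10.0, 10.0, 5)
print("sigma^2 =", round(float(sigma2), 3))
print("p_{F,sigma}(x) at a few points:", np.round(density(x), 4))
```

Similarly, the classical fact that Student's t is a scale mixture of normals, which underlies the approximation argument used for the t mixtures below, can be checked by Monte Carlo: if W ~ Gamma(v/2, rate v/2) (mean 1, variance 2/v) and Z is standard normal, then Z/sqrt(W) has the t distribution with v degrees of freedom. The degrees of freedom and sample size below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
v, n = 12.0, 1_000_000

w = rng.gamma(shape=v / 2.0, scale=2.0 / v, size=n)   # Gamma(v/2, rate v/2)
x = rng.standard_normal(n) / np.sqrt(w)               # scale mixture of normals

print("E[X^2]: empirical", round(float(np.mean(x ** 2)), 3),
      " vs t_v theory", round(v / (v - 2.0), 3))
print("E[X^4]: empirical", round(float(np.mean(x ** 4)), 3),
      " vs t_v theory", round(3.0 * v ** 2 / ((v - 2.0) * (v - 4.0)), 3))
```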
For a fixed v, consider a location mixture of the form Similarly as in normal mixtures, for p = p F,σ , we have by Lemma 5.4, where the constant in the inequality depends only on v. A prior Π can be constructed by putting independent priors for σ and F . As in the case of normal mixtures, we can put an inverse gamma prior on σ 2 . We abbreviate the proof of the following two theorems because after replacing (3.1) by (3.2), it is identical to the normal mixture case. We put DP(a 3 , G) prior on F . Although the required condition for the prior concentration is mild, it is technically demanding to prove Π(K n ) ≥ e −cnδ 2 n . We imitate techniques known for normal mixtures. As mentioned earlier, the key part of the proof is the approximation of p 0 , which can be approximated by p F,σ for some (F, σ), a finite mixture of normal distributions. To be a bit more specific, for any probability measure F on a compact interval [−a, a], the total variation between φ σ * F and φ σ * F is small if the first few, say N , moments of F and F are the same, see Lemma 3.1 of [14]. Also, there exists discrete measures F at most N components such that this moment condition is satisfied, see Lemma A.1 of [14]. Since Student's t distribution is a scale mixture of normal distributions [2], we have, see (5.3) for details, where H is Γ(v/2, v/2) distribution. Therefore, by applying the finite approximation technique of continuous normal mixtures, a mixture of Student's t distribution can also be approximated by a finite mixture. Combining with known concentration results for the Dirichlet distribution, we have the following theorem. Although the proof is long and quite similar to [15], we provide full details for the reader's convenience. We note that the main difference from normal mixtures is that a discrete measure F should be constructed independent of the scale parameter, see Lemma 5.9. Theorem 3.5. Put independent Γ −1 (a 1 , a 2 ) and DP(a 3 , G) priors for σ 2 and F , respectively, where v > 4 and G is the standard Cauchy. Suppose that p 0 satisfies (2.4) and twice continuously differentiable with first and second order derivatives p 0 and p 0 . Furthermore, assume that (p 0 /p 0 ) 2 p 0 dμ < ∞, (p 0 /p 0 ) 4 p 0 dμ < ∞ and P 0 ([−x, x]) ≥ 1 − x −β for some β > 4/3 and every large enough x. Then, Π(K n ) ≥ e −nδ 2 n for some δ n n −1/4 . Note that the mean parameter G of the Dirichlet process is assumed to be the standard Cauchy, but it can be replaced by other distribution whose tail is of a polynomial order. Although Theorem 3.5 cannot allow the Cauchy distribution as p 0 due to the tail assumption required for p 0 , it is not difficult to extend the result further with more elaborate proof. For example, if p 0 is smoother than the twice differentiablity condition in Theorem 3.5, refined approximation techniques can be applied to obtain better rates as in [22,4,26]. Discussion The key idea for the proof of Theorem 2.1 lies in the inequality (2.2). This can be extended to the consistency incorporating a rate (η n ). Assume for the moment that the support of p 0 is bounded. To find an upper bound of (2.2), we applied d V (p, p γn ) γ n /σ n , d V (p 0 , p 0,γn ) γ n and d V (p 0,γn , p γn ) n /γ n . By taking γ 2 n n σ n , a rate sequence (η n ) can be chosen as η n n /σ n . However, this rate is far from optimal rate even when n n −1/2 and σ n 1/ log n. Better rates can be obtained if we have a better bounds for d V (p, p γn ), d V (p 0 , p 0,γn ) and d V (p 0,γn , p γn ) (or similar quantities). 
For example, if p 0 belongs to a β-Hölder class, the bound for d V (p 0 , p 0,γn ) might be improved to d V (p 0 , p 0,γn ) γ β n , as with normal mixtures [22]. We leave this more delicate analysis of rates as future work; and since our approach does not require entropy calculations, we believe that it can eliminate additional log n terms in the existing literature. Proof of Theorem 2.1. It suffices to prove that for every η > 0, A n,η ∩ L n is an empty set for large enough n, where A n,η is defined as (1.2). For every p ∈ A n,η , we have (1). Also, for any M > 0 and p ∈ A n,η , we have The last integral is bounded by 2Mγ −1 n ( n + ξ( n )) by Lemma 5.1. Note that if X and U γn are independent random variables following P 0 and Unif[γ n , γ n ], respectively, the law of X + U γn is equal to P 0,γn . This implies that Therefore, we have Since ξ( n ) n by (2.4), M can be arbitrarily large and n γ n = o(1), the right hand side of (5.1) is bounded below by η/2 for every p ∈ A n,η and large enough n. Since sup p∈Ln d V (p, p γn ) = o(1), we conclude that A n,η ∩ L n is an empty set for large enough n. N , ∞) and B j = (a j−1 , a j ] for j = 1, . . . , N. For δ > 0, define bounded continuous functions ψ j , for j = 1, . . . , N, such that ψ j (x) = 1 for x ∈ [a j−1 + δ, a j − δ], ψ j (x) = 0 for x ≤ a j−1 or x ≥ a j and ψ j is linear on the intervals [a j−1 , a j−1 + δ] and [a j − δ, a j ]. We can choose δ sufficiently small so that and ϕ n = max k ϕ n,k . Since g k is bounded by 1, Hoeffding's inequality [19] implies and for p ∈ U c k , The last display implies If N ≤ n 2 /(16 log 2), we have Since N M / −2 , the desired sequence of tests exists provided that n 4 n is bigger than a universal constant. Proof of Theorem 2.2. Let n be a sequence such that δ n ∨ n −1/4 n 1 and A n = {p ∈ L : d P (p 0 , p) ≥ n }. By Lemma 8.1 of [12] and Lemma 5.2, there exists a constant c 2 > 0 such that P n 0 (Ω n ) → 1, where Ω n is the event that R n (p)dΠ(p) > e −c2nδ 2 n . Also, by Lemma 5.2, there exist a constant K > 0 and a sequence of tests (ϕ n ) satisfying (2.7) for every large enough n. It follows that the proof is complete. Proof of Theorem 3.1. Take a sequence (γ n ) such that n γ n σ n . Then, by Lemma 5.3. Therefore, strong consistency holds by Theorem 2.1. Proof of Theorem 3.2 For a sufficiently slowly diverging (will be described below) sequence (M n ) → ∞, let n = n −1/4 M n and γ n = n M n . If M n grows sufficiently slowly, we can choose a seqeunce (β n ) such that nδ 4 n M 8 n β 4 n and γ n β n 1. for large enough n, where C is a constant depending only on a 1 and a 2 . By the construction of (β n ), we have It follows that Π(L c n ) ≤ e −5nδ 2 n . Also, for any p = p F,σ in L n , we have where the first inequality holds by Lemma 5.3. Therefore, the strong consistency holds by Corollary 2.1. Proof of Theorem 3.5 Throughout this subsection, h is the density of Student's t distribution with v degrees of freedom, h σ (x) = σ −1 h(x/σ), and constants in may depend on v. Proof. Let γ > 0 be given, h σ (x) = ∂h σ (x)/∂x and g σ (x) = sup |y−x|<γ |h σ (y)|. Since where constants in depends only on v. Proof. Note that where the last inequality holds by Lemma 5.6. Also, and the summand in the right hand side is bounded by Combining the last two displays, we have where T z h σ is defined as in Lemma 5.6. Since by Lemma 5.6. . Assume that p 0 is twice continuously differentiable with first and second order derivatives p 0 and p 0 . 
In both cases, constants c 1 , c 2 depend on v and given integrals only. Proof. By the Taylor expansion with the integral form of the remainder, we have for every x and y. Since p(x) = p 0 (x + σy)h(y)dy, we have Thus, for some constant c 1 > 0, where the last integral is finite for v > 2. The proof for the second inequality is identical to Lemma 4 of [15], for which y 4 h(y)dy < ∞ is required. The following lemma is an extension of Lemma 2 in [15] in the sense that a discrete probability measure F can be taken independent of σ ≥ σ 0 . Lemma 5.9. Let a, σ 0 , > 0 be given numbers such that a/σ 0 ≥ 1. For any probability measure F on [−a, a], there exists a discrete probability measure F on [−a, a] with fewer than Daσ −1 0 log −1 support points, such that Proof. Throughout this proof, p F,σ denotes φ σ * F , not h σ * F . Partition the interval [−a, a] into k disjoint, consecutive subintervals I 1 , . . . , I k of length σ 0 and a final interval I k+1 of length l k+1 smaller than σ, where k is the largest integer less than or equal to 2a/σ 0 . Write F = k+1 i=1 F (I i )F i , where each F i is a probability measure concentrated on I i , then p F,σ = k+1 i=1 F (I i )p Fi,σ . Let Z i be a random variable distributed according to F i , and for a i the left endpoint of I i , let G i be the law of W i = (Z i − a i )/σ 0 . For σ ≥ σ 0 , let G i,σ be the law of W i,σ = W i σ 0 /σ. Thus, G i and G i,σ are supported on [0, 1] and [0, σ 0 /σ] ⊂ [0, 1], respectively. As the proof of Lemma 2 in [15], it can be shown that for each i, there exists a discrete probability measure G i with fewer than N i log −1 support points such that d V (p Gi,1 , p G i ,1 ) (log −1 ) 1/2 . Note that, from the construction, the first N i moments of G i and G i are identical, see the proof of Lemma 3.1 in [14]. Let G i,σ be the law of W i,σ = W i σ 0 /σ, where W i is a random variable distributed according to G i . Then, the first N i moments of G i,σ and G i,σ are also identical, so d V (p Gi,σ,1 , p G i,σ ,1 ) (log −1 ) 1/2 by Lemmas 3.1 and 3.2 in [14]. Let F i be the law of a i + σ 0 W i and set F = and similarly for F i and G i,σ . It follows that Proof. Since d V is bounded by 2, we may assume that > 0 is sufficiently small. Note that h(x) = φ τ (x)dH(τ −2 ), where H is Γ(v/2, v/2) distribution (mean 1 and variance 2/v), see [2]. Thus, h σ (x) = φ στ (x)dH(τ −2 ). Let F be a given probability measure on [−a, a]. Then, for any probability measure F on [−a, a], we have for every large enough t. The right hand side of the last display is bounded by provided that t ≥ 3v −1 log −1 . Thus, the right hand side of (5.3) is bounded by where C > 0 is a constant depending only on v. By Lemma 5.9, there exists a discrete probability measure F , with fewer than
6,780.2
2017-01-01T00:00:00.000
[ "Mathematics" ]
Vehicle Positioning and Speed Estimation Based on Cellular Network Signals for Urban Roads In recent years, cellular floating vehicle data (CFVD) has been a popular traffic information estimation technique to analyze cellular network data and to provide real-time traffic information with higher coverage and lower cost. Therefore, this study proposes vehicle positioning and speed estimation methods to capture CFVD and to track mobile stations (MS) for intelligent transportation systems (ITS). Three features of CFVD, which include the IDs, sequence, and cell dwell time of connected cells from the signals of MS communication, are extracted and analyzed. The feature of sequence can be used to judge urban road direction, and the feature of cell dwell time can be applied to discriminate proximal urban roads. The experiment results show the accuracy of the proposed vehicle positioning method, which is 100% better than other popular machine learning methods (e.g., naive Bayes classification, decision tree, support vector machine, and back-propagation neural network). Furthermore, the accuracy of the proposed method with all features (i.e., the IDs, sequence, and cell dwell time of connected cells) is 83.81% for speed estimation. Therefore, the proposed methods based on CFVD are suitable for detecting the status of urban road traffic. Introduction In the last few years, a technical explosion has revolutionized and supported transportation management and control for intelligent transportation systems (ITS).ITS can estimate and obtain traffic information (e.g., traffic flow, traffic density, and vehicle speed) to road users and managers for the improvement of service levels of the road network.The traffic information can be collected and estimated by three approaches, which include: (1) vehicle detection (VD) [1][2][3]; (2) global positioning system (GPS)-equipped probe car reporting [4][5][6][7]; and (3) cellular floating vehicle data (CFVD) [8].However, vehicle data (VD) has high establishment and maintenance costs.GPS-equipped probe car reporting has a low accuracy rate when the penetration rate of GPS-equipped probe cars is too low.The CFVD can be obtained from mobile phones, which have high penetration in many countries [9], and some studies pointed that CFVD could be used to estimate traffic status with high accuracy [10][11][12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27].Collecting traffic information using CFVD is economic and low cost. 
For traffic information estimation based on CFVD, some studies proposed methods to analyze the signals of received signal strength indications (RSSIs), handoffs (HOs), call arrivals (CAs), normal location updates (NLUs), periodical location updates (PLUs), routing area updates (RAUs), and tracking area updates (TAUs).These studies illustrated that higher accuracies of traffic information estimation were performed by using CFVD for highways [10][11][12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27].However, these studies assumed that vehicles can be tracked to the correct route, but the determination of the correct route driven by the user of a mobile station (MS) is difficult and has not been investigated, especially for urban roads.Therefore, this study proposes a vehicle positioning method to capture CFVD and to track MSs for ITS.Three features of CFVD, which include the IDs, sequence, and cell dwell time of connected cells from the signals of MS communications, are extracted and analyzed.The feature of sequence can be used to judge urban road direction, and the feature of cell dwell time can be applied to discriminate proximal urban roads.Furthermore, this study proposes a vehicle speed estimation method to analyze these three features of CFVD (e.g., IDs, sequence, and cell dwell time of connected cells) for obtaining the real-time estimated vehicle speed. The rest of this study is organized as follows: the literature reviews of cellular network architecture, CFVD, and traffic information estimation are presented in Section 2; Section 3 proposes a vehicle positioning method based on CFVD to analyze the signals of a mobile phone in a car which is driven on urban roads; a speed estimation method is proposed to measure the speed of the mobile phone in a car according to CFVD in Section 4; the experimental results and discussions are illustrated in Section 5; and Section 6 gives conclusions and discusses future work. Research Background and Related Work In this section, three subsections, which include cellular networks, CFVD, and traffic information estimation, are discussed for the estimation of traffic information based on CFVD. Cellular Networks This subsection describes the signals and interfaces of cellular networks, which include Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Universal Mobile Telecommunications System (UMTS), and Long-Term Evolution (LTE).For circuit-switching networks, MSs can perform the signals of HOs, CAs, NLUs, and PLUs through the A-interface in GSM and through the IuCS-interface in UMTS.For packet-switching networks, MSs can obtain the signals of RAUs through the Gb-interface in GPRS and through the IuPS-interface in UMTS, and the signals of TAUs can be transmitted between MSs and the core network through the S1-MME-interface in LTE [10][11][12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27].Therefore, a network monitor system can be implemented to capture the cellular network signals via the A-interface, the IuCS-interface, the Gb-interface, the IuPS-interface, and the S1-MME-interface for CFVD. 
CFVD In recent years, CFVD has been analyzed to estimate traffic flow, traffic density, and vehicle speed in some studies.For instance, the signals of HOs from GSM and UMTS could be used to analyze the cell dwell time in cells and to estimate vehicle speed and travel time [8,11,12,16,25,26,28].Figure 1 shows a case study of CFVD for highway and urban roads.One highway (i.e., Highway 1) and four urban roads (i.e., Urban Road 1, Urban Road 2, Urban Road 3, and Urban Road 4) are covered by three cells (i.e., Cell 1, Cell 2, and Cell 3).When a MS performs a call and moves from Cell 1 to Cell 2, a HO signal is generated and recorded.Moreover, the MS keeps moving from Cell 2 to Cell 3, another HO signal is also generated and recorded.These two HO signals can be analyzed to obtain the cell dwell time of Cell 2. Then the vehicle speed and travel time of Highway 1 can be estimated in accordance with the cell dwell time [8,11,12,16,25,26,28]. Although the previous studies provided high accuracies of traffic information estimation, they focused on highways and assumed that vehicles can be tracked to the correct route.In practical environments, a cell usually covers only one highway, and a cell may cover several urban roads.For instance, Cell 1 covers Highway 1, Urban Road 1, and Urban Road 2. Therefore, the determination of the correct route driven by the MS user is difficult, especially for urban roads. Some studies proposed a route classification method based on vehicular mobility patterns [12,29,30].The route classification method recorded the list of cells which covered a same road.For example, the list of cells for Urban Road 1 in Figure 1 is {Cell 1, Cell 2, and Cell 3}.The method could estimate the similarity of the cell list of a route and the list of connected cells of a MS for determining the route which is driven by the MS user [12,29,30].However, the previous method could not determine the road direction, and the proximal urban roads might lead to lower accuracy of route classification. ISPRS Int.J. Geo-Inf.2016, 5, 181 3 of 13 estimate the similarity of the cell list of a route and the list of connected cells of a MS for determining the route which is driven by the MS user [12,29,30].However, the previous method could not determine the road direction, and the proximal urban roads might lead to lower accuracy of route classification. Figure 1.The case study of CFVD for highway and urban roads. Traffic Information Estimation For traffic information estimation, the amount of HOs and NLUs could be collected and analyzed for traffic flow estimation [8,10,14,17], and the amount of CAs and PLUs could be retrieved and used for traffic density estimation [8,10,14,15].Then the vehicle speed can be estimated in accordance with the estimated traffic flow and the estimated traffic density.Furthermore, some studies proposed mobile positioning methods to measure and analyze RSSIs between the MS and base stations (BSs) to determine the location of the MS [20][21][22][23].The time difference and the distance between two locations of the same MS can be measured for vehicle speed estimation and travel time estimation.The estimated traffic information-based CFVD can be referred and analyzed to develop traffic control strategies for governments. 
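The two estimation ideas summarised above reduce to simple relations: a dwell-time-based speed (length of road covered by a cell divided by the cell dwell time) and the fundamental flow-density relation v = q/k, with flow inferred from HO/NLU counts and density from CA/PLU counts. The sketch below only illustrates these raw relations; the cited works use calibrated models, and all numbers are made-up placeholders.

```python
# Back-of-envelope illustration of the two estimation ideas summarised above.
# Cell-segment length, flow, and density are placeholders, not measured values.

def speed_from_dwell(cell_segment_km, dwell_time_s):
    """Speed implied by the time a call dwells in one cell while the vehicle
    traverses the portion of road covered by that cell."""
    return cell_segment_km / (dwell_time_s / 3600.0)          # km/h

def speed_from_flow_density(flow_veh_per_h, density_veh_per_km):
    """Fundamental relation v = q / k, with q from HO/NLU counts and k from
    CA/PLU counts in the cited approaches."""
    return flow_veh_per_h / density_veh_per_km                # km/h

print(speed_from_dwell(cell_segment_km=2.5, dwell_time_s=150))               # 60.0 km/h
print(speed_from_flow_density(flow_veh_per_h=1200, density_veh_per_km=20))   # 60.0 km/h
```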
Vehicle Positioning Method A vehicle positioning method is proposed to collect and analyze CFVD (e.g., the IDs, sequence, and cell dwell time of connected cells) from the signals of MS communications (e.g., call arrivals and handoffs) for determining urban road segments which are driven by MS users in their cars.For instance, Figure 2 shows a case study of an urban road network and cell coverage.There are five cells (i.e., Cell1 to Cell5) and three urban road segments (i.e., Road1 to Road3) in this case.When the MS moves and performs handoff signals, the road segments which are driven by the MS user in their car can be tracked according to the IDs, sequence, and cell dwell time of connected cells.In this case, Cell5, Cell4, Cell3, and Cell2 may be connected by a MS when the MS moves through Road1 to Road2; Cell5, Cell4, Cell3, and Cell1 may be connected by a MS when the MS moves through Road1 to Road3. Traffic Information Estimation For traffic information estimation, the amount of HOs and NLUs could be collected and analyzed for traffic flow estimation [8,10,14,17], and the amount of CAs and PLUs could be retrieved and used for traffic density estimation [8,10,14,15].Then the vehicle speed can be estimated in accordance with the estimated traffic flow and the estimated traffic density.Furthermore, some studies proposed mobile positioning methods to measure and analyze RSSIs between the MS and base stations (BSs) to determine the location of the MS [20][21][22][23].The time difference and the distance between two locations of the same MS can be measured for vehicle speed estimation and travel time estimation.The estimated traffic information-based CFVD can be referred and analyzed to develop traffic control strategies for governments. Vehicle Positioning Method A vehicle positioning method is proposed to collect and analyze CFVD (e.g., the IDs, sequence, and cell dwell time of connected cells) from the signals of MS communications (e.g., call arrivals and handoffs) for determining urban road segments which are driven by MS users in their cars.For instance, Figure 2 shows a case study of an urban road network and cell coverage.There are five cells (i.e., Cell 1 to Cell 5 ) and three urban road segments (i.e., Road 1 to Road 3 ) in this case.When the MS moves and performs handoff signals, the road segments which are driven by the MS user in their car can be tracked according to the IDs, sequence, and cell dwell time of connected cells.In this case, Cell 5 , Cell 4 , Cell 3 , and Cell 2 may be connected by a MS when the MS moves through Road 1 to Road 2 ; Cell 5 , Cell 4 , Cell 3 , and Cell 1 may be connected by a MS when the MS moves through Road 1 to Road 3 . ISPRS Int.J. Geo-Inf.2016, 5, 181 3 of 13 estimate the similarity of the cell list of a route and the list of connected cells of a MS for determining the route which is driven by the MS user [12,29,30].However, the previous method could not determine the road direction, and the proximal urban roads might lead to lower accuracy of route classification. Figure 1.The case study of CFVD for highway and urban roads. 
Traffic Information Estimation For traffic information estimation, the amount of HOs and NLUs could be collected and analyzed for traffic flow estimation [8,10,14,17], and the amount of CAs and PLUs could be retrieved and used for traffic density estimation [8,10,14,15].Then the vehicle speed can be estimated in accordance with the estimated traffic flow and the estimated traffic density.Furthermore, some studies proposed mobile positioning methods to measure and analyze RSSIs between the MS and base stations (BSs) to determine the location of the MS [20][21][22][23].The time difference and the distance between two locations of the same MS can be measured for vehicle speed estimation and travel time estimation.The estimated traffic information-based CFVD can be referred and analyzed to develop traffic control strategies for governments. Vehicle Positioning Method A vehicle positioning method is proposed to collect and analyze CFVD (e.g., the IDs, sequence, and cell dwell time of connected cells) from the signals of MS communications (e.g., call arrivals and handoffs) for determining urban road segments which are driven by MS users in their cars.For instance, Figure 2 shows a case study of an urban road network and cell coverage.There are five cells (i.e., Cell1 to Cell5) and three urban road segments (i.e., Road1 to Road3) in this case.When the MS moves and performs handoff signals, the road segments which are driven by the MS user in their car can be tracked according to the IDs, sequence, and cell dwell time of connected cells.In this case, Cell5, Cell4, Cell3, and Cell2 may be connected by a MS when the MS moves through Road1 to Road2; Cell5, Cell4, Cell3, and Cell1 may be connected by a MS when the MS moves through Road1 to Road3.Therefore, the proposed vehicle positioning method is designed to analyze CFVD and to apply the k-nearest neighbor algorithm (kNN) for determining the location of the vehicle.This method includes four steps (shown in Figure 3) which include: (1) collecting connection and handoff signals from cellular networks; (2) analyzing cell ID, sequence, and cell dwell time of connected cells; (3) retrieving k 1 similar records from a historical dataset; and (4) determining the location of the vehicle.The details of each step are presented in following subsections. ISPRS Int.J. Geo-Inf.2016, 5, 181 4 of 13 Therefore, the proposed vehicle positioning method is designed to analyze CFVD and to apply the k-nearest neighbor algorithm (kNN) for determining the location of the vehicle.This method includes four steps (shown in Figure 3) which include: (1) collecting connection and handoff signals from cellular networks; (2) analyzing cell ID, sequence, and cell dwell time of connected cells; (3) retrieving k1 similar records from a historical dataset; and (4) determining the location of the vehicle.The details of each step are presented in following subsections. 
Collecting Connection and Handoff Signals from Cellular Networks Step 1 captures and collects the cell IDs and timestamps from cellular network signals (e.g., call arrivals and handoffs) which are obtained by MS and core networks via A and IuCS interfaces.This study applies an international mobile subscriber identity (IMSI) as the ID of the MS for tracking each MS.For instance, a call was performed by IMSI1 at PM 16:08:02 on 18 May 2016, and the cellular network signals during this call were collected and showed in Table 1.When this MS moved from Cell1 to Cell2, a handoff procedure was performed at PM 16:10:35.However, cell oscillation might occur between 16:10:35 and 16:11:07.Then, the MS kept moving and entered the coverage of Cell3, and a handoff signal was generated at PM 16:15:58.Finally, a call complete procedure was performed at 16:18:39.These signals can be captured and used as CFVD for vehicle positioning and speed estimation. Analyzing Cell ID, Sequence, and Cell Dwell Time of Connected Cells Step 2 can analyze the records (i.e., cell IDs and timestamps) from Step 1 and extract three features, which include the cell IDs, sequence, and cell dwell time of connected cells.This study assumes that n cells are available in experimental environments.The extraction processes of each feature are illustrated in the following subsections. Collecting Connection and Handoff Signals from Cellular Networks Step 1 captures and collects the cell IDs and timestamps from cellular network signals (e.g., call arrivals and handoffs) which are obtained by MS and core networks via A and IuCS interfaces.This study applies an international mobile subscriber identity (IMSI) as the ID of the MS for tracking each MS.For instance, a call was performed by IMSI 1 at PM 16:08:02 on 18 May 2016, and the cellular network signals during this call were collected and showed in Table 1.When this MS moved from Cell 1 to Cell 2 , a handoff procedure was performed at PM 16:10:35.However, cell oscillation might occur between 16:10:35 and 16:11:07.Then, the MS kept moving and entered the coverage of Cell 3 , and a handoff signal was generated at PM 16:15:58.Finally, a call complete procedure was performed at 16:18:39.These signals can be captured and used as CFVD for vehicle positioning and speed estimation.Step 2 can analyze the records (i.e., cell IDs and timestamps) from Step 1 and extract three features, which include the cell IDs, sequence, and cell dwell time of connected cells.This study assumes that n cells are available in experimental environments.The extraction processes of each feature are illustrated in the following subsections. Cell ID For the feature analysis of cell ID, this study sets the value of Cell i (c i ) as 1 if Cell i is connected during a call, but otherwise the value of cell is 0. The feature of cell ID, which can be presented as a vector space model (C), is defined in Equation (1).For example, Cell 1 , Cell 2 , and Cell 3 are connected by IMSI 1 in Table 1, so the values of c 1 , c 2 , and c 3 are 1 (shown in Equation ( 2)). 
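As an illustrative sketch of Steps 1 and 2 (not code from the study), the call record of Table 1 can be reduced to per-cell dwell times. The 16:10:46 timestamp inside the oscillation window is inferred so that the dwell times quoted later in the text (153 + 21 s on Cell1, 11 + 291 s on Cell2, 161 s on Cell3) are reproduced; the remaining timestamps are those given for IMSI1.

```python
from datetime import datetime

# Sketch of Steps 1-2: turn the connection/handoff log of Table 1 into total
# cell dwell times. The 16:10:46 entry is an inferred oscillation split; the
# other timestamps are those quoted for IMSI1.
events = [  # (timestamp, serving cell after this event)
    ("2016-05-18 16:08:02", "Cell1"),   # call arrival
    ("2016-05-18 16:10:35", "Cell2"),   # handoff
    ("2016-05-18 16:10:46", "Cell1"),   # cell oscillation (inferred split)
    ("2016-05-18 16:11:07", "Cell2"),   # handoff back
    ("2016-05-18 16:15:58", "Cell3"),   # handoff
    ("2016-05-18 16:18:39", None),      # call complete
]

def dwell_times(events):
    totals = {}
    for (t0, cell), (t1, _) in zip(events, events[1:]):
        dt = (datetime.fromisoformat(t1) - datetime.fromisoformat(t0)).seconds
        totals[cell] = totals.get(cell, 0) + dt
    return totals

print(dwell_times(events))   # {'Cell1': 174, 'Cell2': 302, 'Cell3': 161}
```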
Sequence For the judgment of urban road direction, the handoff sequence is an important feature, so this study analyzes the sequence of connected cells for determining the road segment driven by a MS user.When Cell i is firstly connected, the value of Cell i (o i ) is given with a higher weight value.Then the feature of sequence which can be presented as a vector space model (O) is defined in Equation ( 3).Furthermore, this study only considers the first x connected cells, and a vector set of weight values (A) for the feature of sequence is defined in Equation ( 4).For instance, this study set the value of x as 3, and Equation ( 5) is adopted to set the values of A (i.e., a 1 = 1; a 2 = 0.5; a 3 = 0.25).In the case of IMSI 1 in Table 1, Cell 1 is firstly connected, so the value of Cell 1 (o 1 ) is given as 1 (i.e., a 1 ).Then Cell 2 is secondly connected, and the value of Cell 2 (o 2 ) is adopted as 0.5 (i.e., a 2 ).Finally, this study set the value of Cell 3 (o 3 ) as 0.25 (i.e., a 3 ) and the values of other cells as 0 (shown in Equation ( 6)). Cell Dwell Time For the discrimination of proximal urban roads, the cell dwell time is an important feature, so this study analyzes the cell dwell time of each connected cell during the same call.However, cell oscillation may occur, especially in a city.Therefore, the total cell dwell time of each cell is considered and summarized.Then, the feature of cell dwell time, which can be presented as a vector space model (T), is defined in Equation (7).Moreover, this study only considers the first y cells with longer cell dwell time, and a vector set of weight values (B) for the feature of cell dwell time is defined in Equation (8).For example, cell oscillation might occur between 16:10:35 and 16:11:07 in Table 1.Therefore, the total cell dwell time of Cell 1 is 174 s (i.e., 174 = 153 + 21), and the total cell dwell time of Cell 2 is 302 s (i.e., 302 = 11 + 291).Then, the cell dwell time of Cell 3 is 161 s.In this study, the value of y is adopted as 3, and Equation ( 9) is adopted to set the values of B (i.e., b 1 = 1; b 2 = 0.5; b 3 = 0.25).The cell dwell time of Cell 2 is the longest in the case of Table 1, so the value of Cell 2 (t 2 ) is given as 1 (i.e., b 1 ).Then, the values of Cell 3 (t 3 ) and Cell 1 (t 1 ) are adopted as 0.5 (i.e., b 2 ) and 0.25 (i.e., b 3 ), respectively.Finally, this study sets the values of other cells as 0 (shown in Equation ( 10)).T = {t 1 , t 2 , t 3 , t 4 , ..., t n } , where o i = the corresponding weight value of Cell i (7) T = {0.25, 1, 0.5, 0, ..., 0} (10) Retrieving k 1 Similar Records from a Historical Dataset In this study, m calls are transformed in accordance with Equation (11) and stored in a historical database.These m records are defined as historical dataset H (shown in Equation ( 13)).Furthermore, the driven road segment of each historical record is labeled in the database.When a new call is performed and completed, the vector set of this call (r) (shown in Equation ( 14)) is transformed according to Equation ( 11) and compared with each record in historical dataset H by Equation (15).Then the most similar historical record with the distance g 1 can be retrieved in accordance with Equation ( 16), and Step 3 retrieves k 1 similar records from the historical dataset for vehicle positioning. 
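The three feature vectors can then be assembled as in Equations (1), (3), and (7). The sketch below (illustrative, with a small n = 5 rather than the study's 64 cells) reproduces the worked values C = {1, 1, 1, 0, 0} and O = {1, 0.5, 0.25, 0, 0}. Note that ranking by summed dwell time, as the text describes, places Cell1 (174 s) just above Cell3 (161 s) and therefore yields T = {0.5, 1, 0.25, 0, 0}, whereas Equation (10) lists {0.25, 1, 0.5, 0, ..., 0} with those two weights swapped.

```python
import numpy as np

N_CELLS = 5                        # illustrative n (the study uses n = 64)
SEQ_WEIGHTS = [1.0, 0.5, 0.25]     # a_1..a_3, first x = 3 connected cells
DWELL_WEIGHTS = [1.0, 0.5, 0.25]   # b_1..b_3, y = 3 longest total dwell times

def features(dwell, first_connected):
    """Build C, O, T of Equations (1), (3), (7) and concatenate them.
    dwell: {cell index (1-based): total dwell time in s}
    first_connected: cell indices in order of first connection."""
    C, O, T = np.zeros(N_CELLS), np.zeros(N_CELLS), np.zeros(N_CELLS)
    for i in dwell:
        C[i - 1] = 1.0
    for w, i in zip(SEQ_WEIGHTS, first_connected):
        O[i - 1] = w
    for w, i in zip(DWELL_WEIGHTS, sorted(dwell, key=dwell.get, reverse=True)):
        T[i - 1] = w
    return np.concatenate([C, O, T])

# IMSI1 example: total dwell times 174 s, 302 s, 161 s on Cells 1-3,
# first connected in the order Cell1, Cell2, Cell3.
r = features({1: 174, 2: 302, 3: 161}, first_connected=[1, 2, 3])
print(r.reshape(3, N_CELLS))
# C = [1, 1, 1, 0, 0]; O = [1, 0.5, 0.25, 0, 0]; T = [0.5, 1, 0.25, 0, 0]
```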
Determining the Location of a Vehicle For the determination of vehicle location, Step 4 applies a majority rule to analyze the k 1 similar records, which include the corresponding driven road segment from Step 3.For instance, a case study of a historical dataset and a new record is given in Table 2.There are five cells (i.e., n = 5) and six historical records (i.e., m = 6), and the value of k 1 is adopted as 3 in this case.Equation ( 15) is used to calculate the distance between dataset r (i.e., a new record) and each historical record.The result shows that the k 1 similar records are h 1 , h 2 , and h 4 , so Road 1 is supported by two records (i.e., h 1 and h 2 ).Therefore, the driven road segment of this new record is determined as Road 1 . Speed Estimation Method This study proposes a method and applies the k-nearest neighbor algorithm to extract the features of CFVD (e.g., the IDs, sequence, and cell dwell time of connected cells) and to estimate vehicle speed.The proposed method includes four steps (shown in Figure 4) which include: (1) determining the location of a vehicle; (2) analyzing cell ID, sequence, and cell dwell time of connected cells; (3) retrieving k 2 similar records with the same road segment from historical dataset; and (4) estimating the speed of a vehicle.The details of each step are presented in following subsections. Speed Estimation Method This study proposes a method and applies the k-nearest neighbor algorithm to extract the features of CFVD (e.g., the IDs, sequence, and cell dwell time of connected cells) and to estimate vehicle speed.The proposed method includes four steps (shown in Figure 4) which include: (1) determining the location of a vehicle; (2) analyzing cell ID, sequence, and cell dwell time of connected cells; (3) retrieving k2 similar records with the same road segment from historical dataset; and (4) estimating the speed of a vehicle.The details of each step are presented in following subsections. Determining the Location of Vehicle Step 1 determines the driven road segment of the MS in accordance with CFVD and the proposed vehicle positioning method in Section 3.This study only considers and analyzes the historical records with the same road segment to estimate vehicle speed.For example, when a new record is determined as Roadl, the historical records with Roadl are considered in the following steps. Analyzing Cell ID, Sequence, and Cell Dwell Time of Connected Cells Step 2 adopts Equations ( 1), (3), and (7) to extract the features of historical records and new records which include the IDs, sequence, and cell dwell time of connected cells.Each record can be transformed as a vector space model (shown in Equation ( 11)).Historical records are presented as a vector set H, and a new record is presented as a vector set r in accordance with Equations ( 13) and ( 14). Determining the Location of Vehicle Step 1 determines the driven road segment of the MS in accordance with CFVD and the proposed vehicle positioning method in Section 3.This study only considers and analyzes the historical records with the same road segment to estimate vehicle speed.For example, when a new record is determined as Road l , the historical records with Road l are considered in the following steps. 
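A compact sketch of the kNN retrieval and majority rule of Steps 3 and 4 above is given below. Euclidean distance over the concatenated feature vectors is assumed for Equation (15), since the distance formula itself is not legible in the extracted text, and the historical records are illustrative placeholders rather than the entries of Table 2.

```python
import numpy as np
from collections import Counter

def knn_position(r, history, k1=3):
    """Steps 3-4: retrieve the k1 most similar historical records (Euclidean
    distance assumed for Equation (15)) and pick the road by majority vote."""
    dists = [np.linalg.norm(r - vec) for vec, _ in history]
    nearest = np.argsort(dists)[:k1]
    votes = Counter(history[i][1] for i in nearest)
    return votes.most_common(1)[0][0]

# Placeholder historical dataset (feature vector, labelled road segment);
# the numbers are illustrative, not the records of Table 2.
history = [
    (np.array([1.0, 1.0, 1.0, 0.0, 0.0]), "Road1"),
    (np.array([1.0, 1.0, 1.0, 0.0, 0.0]), "Road1"),
    (np.array([1.0, 1.0, 0.0, 1.0, 0.0]), "Road2"),
    (np.array([0.0, 1.0, 1.0, 1.0, 0.0]), "Road2"),
    (np.array([1.0, 1.0, 1.0, 1.0, 0.0]), "Road3"),
]
r_new = np.array([1.0, 1.0, 1.0, 0.0, 0.0])
print(knn_position(r_new, history))   # -> Road1
```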
Analyzing Cell ID, Sequence, and Cell Dwell Time of Connected Cells Step 2 adopts Equations ( 1), ( 3) and (7) to extract the features of historical records and new records which include the IDs, sequence, and cell dwell time of connected cells.Each record can be transformed as a vector space model (shown in Equation ( 11)).Historical records are presented as a vector set H, and a new record is presented as a vector set r in accordance with Equations ( 13) and ( 14). Retrieving k 2 Similar Records with the Same Road from Historical Dataset Step 3 retrieves k 2 similar records with the same road segment from a historical dataset according to Equation (15).Furthermore, the vehicle speed of each historical record is labeled in a database.For instance, in the case of Table 2, the new record r is determined as Road 1 , so three historical records (i.e., h 1 , h 2 , and h 3 ) are considered to be analyzed for vehicle speed estimation.If the value of k 2 is adopted as 2 in this case, the records h 1 and h 2 are retrieved as the k 2 similar records. Estimating the Speed of a Vehicle Step 4 applies a weighted mean method to analyze the k 2 similar records for vehicle speed estimation.In this study, new record r is determined as Road l , and the distance between this record and the more similar record with vehicle speed v 1 is defined as p 1 in Equation (17).Moreover, the distance between this record and the j-th most similar record with vehicle speed v j is defined as p j .Then the vehicle speed of this record is estimated as u by Equation (18).For example, the k 2 similar records are h 1 and h 2 in Table 2 when the value of k 2 is 2. The value of d(r, h 1 ) is 0 (i.e., p 1 = 0), and the value of d(r, h 2 ) is about 0.707 (i.e., p 2 = 0).Then, Equation ( 18) is adopted to estimate the vehicle speed of the new record r as 60 km/h (shown in Equation ( 19)).p 1 = mind (r, h i ) where the driven road segment of h i is Road l (17) where where where ω 1 = 0.707−0 0.707−0 = 1 and ω i = 0.707−0.7070.707−0 = 0 = 60 (19) Experimental Results and Discussions The collection of CFVD and the information of urban road networks are presented in Section 5.1.The collected CFVD is used to evaluate the proposed vehicle positioning method and speed estimation method in Sections 5.2 and 5.3, respectively. Experimental Environment In experimental environments, a MS (e.g., HTC (Taoyuan, Taiwan) M8 running the Android 2.2.2platform) is carried in a car to perform call procedures when the car is driven on urban roads, and the cellular network signals of these calls can be captured for the collection of CFVD.Six urban road segments in Kaohsiung and Pingtung in Taiwan (shown in Figure 5) are driven in 27 runs.There are 64 different base stations (BSs) (i.e., n = 64) detected on these road segments in Taiwan. 
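The weighted-mean speed estimate of Equations (17)-(19) above can be sketched as follows. The weight w_j = (p_max - p_j)/(p_max - p_min), normalised to sum to one, is inferred from the worked example rather than quoted verbatim, Euclidean distance is again assumed, and the historical records are placeholders chosen so that the new record coincides with its nearest neighbour, reproducing the 60 km/h outcome.

```python
import numpy as np

def estimate_speed(r, history, road, k2=2):
    """Weighted-mean speed over the k2 most similar same-road records.
    Weights follow w_j = (p_max - p_j)/(p_max - p_min), normalised, as
    inferred from the worked example of Equations (17)-(19)."""
    same_road = sorted((np.linalg.norm(r - vec), spd)
                       for vec, spd, seg in history if seg == road)
    p = np.array([d for d, _ in same_road[:k2]])
    spd = np.array([s for _, s in same_road[:k2]])
    if p.max() == p.min():                 # all candidates equally distant
        return float(spd.mean())
    w = (p.max() - p) / (p.max() - p.min())
    return float(np.sum(w * spd) / np.sum(w))

# Illustrative records (feature vector, speed in km/h, road segment), chosen so
# the new record coincides with its nearest neighbour, as in the paper's example.
history = [
    (np.array([1.0, 1.0, 1.0, 0.0, 0.0]), 60.0, "Road1"),
    (np.array([1.0, 0.5, 1.0, 0.0, 0.0]), 45.0, "Road1"),
    (np.array([1.0, 1.0, 0.0, 1.0, 0.0]), 50.0, "Road2"),
]
r_new = np.array([1.0, 1.0, 1.0, 0.0, 0.0])
print(estimate_speed(r_new, history, road="Road1"))   # -> 60.0
```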
For the evaluations of the vehicle positioning method and speed estimation method, some popular machine learning methods (e.g., kNN, naive Bayes classification (NB), decision tree (DT), support vector machine (SVM), and back-propagation neural network (BPNN) [31,32]), are implemented and compared by using the R language [33,34] and Rstudio [35] to analyze collected CFVD in experiments.This study uses the packages of class [36], e1071 [37], party [38], and neuralnet [39] to implement kNN, NB, DT, SVM, and BPNN algorithms, respectively.Furthermore, the k-fold cross-validation method [31,32] is used to analyze each test run.In the i-th iteration, the data of the i-th run is selected as the test corpus, and the other test runs are collectively used to be training data for performance analyses. support vector machine (SVM), and back-propagation neural network (BPNN) [31,32]), are implemented and compared by using the R language [33,34] and Rstudio [35] to analyze collected CFVD in experiments.This study uses the packages of class [36], e1071 [37], party [38], and neuralnet [39] to implement kNN, NB, DT, SVM, and BPNN algorithms, respectively.Furthermore, the k-fold cross-validation method [31,32] is used to analyze each test run.In the i-th iteration, the data of the i-th run is selected as the test corpus, and the other test runs are collectively used to be training data for performance analyses. The Evaluation of Vehicle Positioning Method For the evaluation of the vehicle positioning method, this study considers different features and machine learning methods to analyze CFVD.Considering cell ID and kNN first; it can be observed that its performance of vehicle positioning is 51.85% (shown in Table 3).The cause of several errors is direction misjudgment when only the feature of cell ID is considered.Then, the features of cell ID and sequence are considered for the judgment of urban road direction, and the results show that the accuracy of the vehicle positioning method is improved to 92.59%.However, some proximal urban roads cannot be discriminated by using the features of cell ID and sequence.Finally, this study analyzes all features (i.e., the IDs, sequence, and cell dwell time of connected cells) to determine the driven road segment of the MS user, and the accuracy can be improved to 100%.Therefore, the feature of cell dwell time can support for the discrimination of proximal urban roads. The Evaluation of Vehicle Positioning Method For the evaluation of the vehicle positioning method, this study considers different features and machine learning methods to analyze CFVD.Considering cell ID and kNN first; it can be observed that its performance of vehicle positioning is 51.85% (shown in Table 3).The cause of several errors is direction misjudgment when only the feature of cell ID is considered.Then, the features of cell ID and sequence are considered for the judgment of urban road direction, and the results show that the accuracy of the vehicle positioning method is improved to 92.59%.However, some proximal urban roads cannot be discriminated by using the features of cell ID and sequence.Finally, this study analyzes all features (i.e., the IDs, sequence, and cell dwell time of connected cells) to determine the driven road segment of the MS user, and the accuracy can be improved to 100%.Therefore, the feature of cell dwell time can support for the discrimination of proximal urban roads. Table 3.The comparisons of the proposed method with different features for vehicle positioning. 
Feature                                          Accuracy
Cell ID (previous method [12,29])                51.85%
Cell ID and sequence                             92.59%
Cell ID and cell dwell time                      88.89%
Cell ID, sequence, and cell dwell time           100%

For the comparison of different machine learning methods, all features are considered and analyzed to determine the driven road segment. Four factors, which include precision, recall, F 1 -measure (shown in Equation (20)), and accuracy, are used to evaluate the performance of each method. Table 4 shows that the performance of the proposed method is higher than that of the other methods. In summary, this study proposes vehicle positioning and speed estimation methods that capture CFVD to track MSs for intelligent transportation systems. Three features of CFVD, which include the IDs, sequence, and cell dwell time of connected cells from the signals of MS communications, are extracted and analyzed. The feature of sequence can be used to judge the urban road direction, and the feature of cell dwell time can be applied to discriminate proximal urban roads. The experimental results show that the accuracy of the proposed vehicle positioning method is better than that of other popular machine learning methods (e.g., NB, DT, SVM, and BPNN). Furthermore, the accuracy of the proposed method with all features (i.e., the IDs, sequence, and cell dwell time of connected cells) is 83.81% for speed estimation. However, cell oscillation problems may disturb the cell dwell time of each cell and the vehicle speed estimation. This study summarizes the total cell dwell time of each cell to mitigate these problems, but they may still occur depending on environmental factors. Therefore, environmental factors may be analyzed to filter out cell oscillation in future work.

Figure 1. The case study of CFVD for highway and urban roads.
Figure 2. The case study of an urban road network and cell coverage.
Figure 3. The steps of the vehicle positioning method.
Figure 4. The steps of the speed estimation method.
Figure 5. The urban road segments in the experimental environment.
Table 1. The cellular network signals during a call performed by IMSI 1 on 18 May 2016.
Table 2. A case study of a historical dataset and a new record.
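Relating to the four evaluation factors used in Table 4 above (precision, recall, F 1 -measure, and accuracy), the short Python sketch below computes them per road segment from true versus predicted labels; the standard definition F 1 = 2PR/(P + R) is assumed rather than reproducing Equation (20), and the label lists are illustrative.

    def evaluate(y_true, y_pred):
        # Overall accuracy plus per-class precision, recall, and F1.
        accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
        per_class = {}
        for label in set(y_true):
            tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
            fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
            fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
            precision = tp / (tp + fp) if tp + fp else 0.0
            recall = tp / (tp + fn) if tp + fn else 0.0
            f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
            per_class[label] = (precision, recall, f1)
        return accuracy, per_class

    print(evaluate(["Road1", "Road2", "Road1"], ["Road1", "Road1", "Road1"]))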
7,849.8
2016-10-02T00:00:00.000
[ "Computer Science" ]
Molecular Detection of the Harmful Raphidophyte Chattonella subsalsa Biecheler by Whole-Cell Fluorescence in-situ Hybridisation Assay Species of the genus Chattonella (Raphidophyceae) are a group of marine protists that are commonly found in coastal waters. Some are known as harmful microalgae that form noxious blooms and cause massive fish mortality in finfish aquaculture. In Malaysia, blooms of Chattonella have been recorded since the 1980s in the Johor Strait. In this study, two strains of Chattonella were established from the strait, and morphological examination revealed characteristics resembling Chattonella subsalsa. The molecular characterization further confirmed the species' identity as C. subsalsa. To precisely detect the cells of C. subsalsa in the environment, a whole-cell fluorescence in-situ hybridisation (FISH) assay was developed. The species-specific oligonucleotide probes were designed in silico based on the nucleotide sequences of the large subunit (LSU) and internal transcribed spacer 2 (ITS2) of the ribosomal DNA (rDNA). The best candidate signature regions in the LSU-rRNA and ITS2-rDNA were selected based on hybridisation efficiency and probe parameters. The probes were synthesised as biotinylated probes and tested by tyramide signal amplification with FISH (FISH-TSA). The results showed the specificity of the probes toward the target cells. FISH-TSA has been proven to be a potential tool in the detection of harmful algae in the environment and could be applied to harmful algal monitoring programs. INTRODUCTION Harmful algal bloom (HAB), also known as "red tide", occurs when harmful microalgae grow in high biomass in the water column, causing severe consequences such as food poisoning syndromes in humans who consume algal toxin-contaminated seafood and massive mortality of marine organisms (Hoagland et al. 2002). Paralytic shellfish poisoning has been the focus of attention in Malaysia, as it has been linked to the majority of human intoxication cases (Usup et al. 2012; Yñiguez et al. 2021). Several causative dinoflagellates, including Pyrodinium bahamense Plate, Alexandrium tamiyavanichii Balech, A. minutum Halim, and Gymnodinium catenatum Graham, have been documented throughout the Malaysian waters (Leaw et al. 2005; Lim et al. 2007). Nonetheless, other algal-related incidents have been documented in Malaysia, such as massive fish kills in aquaculture farms (Lim et al. 2014; Teng et al. 2016; Yñiguez et al. 2021; Lum et al. 2021). The majority of these events have been linked to harmful marine dinoflagellates, such as Margalefidinium polykrikoides (Margalef) Gómez, Richlen, and Anderson, Noctiluca scintillans (Macartney) Kofoid & Swezy, and Karlodinium australe Salas, Bolch, and Hallegraeff (Lim et al. 2014; Teng et al. 2016). Among the harmful microalgae, several groups of raphidophytes have been recognized as harmful to marine organisms (Lum et al. 2021). Members of the genus Chattonella Biecheler are among those that have caused severe damage to the aquaculture industries in many coastal countries, for instance Japan (Okaichi 2003; Imai & Yamaguchi 2012). The first record of a Chattonella bloom was reported on the Malabar Coast, India, while the most severe fish kill event was recorded in Harima-Nada, the Seto Inland Sea, Japan, in the summer of 1972 (Imai & Yamaguchi 2012). In Malaysia, the occurrence was first documented in 1983 along the Johor Strait (Maclean 1989).
Conventionally, light microscopy has been used to identify the morphological characteristics of Chattonella species. The species are unicellular, bi-flagellated, and pigmented, ranging from golden brown to greenish in some species depending on the fucoxanthin content (Klöpper et al. 2013). In general, species of Chattonella are differentiated based on cell size, cell shape, presence of a hyaline posterior tail, and mucocysts (Hara & Chihara 1982; Hara et al. 1994; Bowers et al. 2006). However, diversity in the morphology of Chattonella is high, even within the same species. Often, molecular characterization using gene markers such as ribosomal RNA genes (rDNA) is required to aid species recognition (Bowers et al. 2006; Demura et al. 2009). Among the species of Chattonella, C. antiqua (Hada) Ono, C. marina (Subrahmanyan) Hara and Chihara, C. ovata Hara and Chihara (also referred to as the C. marina complex sensu Demura et al. 2009), and C. subsalsa Biecheler have been reported to cause HABs that are associated with massive farmed-fish mortality and impact the economies of affected countries worldwide (Hiroishi et al. 2005; Edvardsen & Imai 2006; Imai et al. 2006; Imai & Yamaguchi 2012; Lum et al. 2021). In the Johor Strait shared between Malaysia and Singapore, the occurrence of Chattonella has often been reported from the monitoring and research studies of both countries (Khoo & Wee 1997; Leong et al. 2015; Tan et al. 2016; Kok et al. 2019; Liow et al. 2019). Morphological plasticity in the species, however, has hampered precise species recognition, particularly in preserved environmental samples, where the cells tend to deform and the morphology deteriorates after fixation (Katano et al. 2009). This often leads to species misidentification. Alternative approaches, such as molecular techniques (Bowers et al. 2006; Stacca et al. 2016), could therefore be explored to overcome this limitation. In this study, a whole-cell tyramide signal amplification-fluorescence in situ hybridisation (FISH-TSA) assay was developed to detect the harmful raphidophyte Chattonella subsalsa. The ribosomal RNA-targeted species-specific probes were designed in silico and applied in the assay. Algal Cultures and Morphological Observation Live plankton samples were collected from the Johor Strait using a 20 µm-mesh plankton net hauled vertically through subsurface seawater (< 5 m) during high tide. The micropipette technique was used to isolate the targeted cells. Cultures were established and grown in f/2 medium (Guillard & Ryther 1962) at a salinity of 30 and 25 ± 0.5°C, under a light intensity of 100 µmol photons m -2 s -1 , with a 12:12 h light:dark photoperiod. Morphological observation of cell shape and chloroplasts was performed using an Olympus IX51 research microscope (Olympus, Tokyo, Japan). To observe the nuclear position, cells were first stained with the DAPI nuclear stain and then examined under ultraviolet light with a UV filter set. Digital images were captured with an Olympus DP72 digital camera (Olympus, Tokyo, Japan).
Genomic DNA Extraction, rDNA Amplification and Sequencing The genomic DNA of the Chattonella cultures was extracted as described by Leaw et al. (2010). In brief, mid-exponential cells from 200 mL of culture were harvested by centrifugation (1100 × g, 1 min). The cell pellets were rinsed with ddH 2 O and resuspended in 10× NET lysis buffer (5 M NaCl, 0.5 M EDTA, 1 M Tris-HCl, pH 8) and 1% sodium dodecyl sulphate. The mixture was incubated at 65°C and subsequently extracted with chloroform:isoamyl alcohol (24:1) and phenol:chloroform:isoamyl alcohol (25:24:1). The genomic DNA was then precipitated by adding absolute ethanol and 3 M sodium acetate (pH 5). The DNA pellet was rinsed with cold 70% ethanol, dissolved in 30 µL of TE buffer (10 mM Tris-HCl, pH 7.4; 1 mM EDTA, pH 8), and stored at -20°C until further analysis. Phylogenetic Analyses Taxon sampling was performed by retrieving the LSU and ITS-rDNA nucleotide sequences of Chattonella species from the NCBI GenBank nucleotide database (Table 1). The sequences of Heterosigma akashiwo were used as an outgroup. The newly obtained C. subsalsa sequences from this study and the retrieved sequences were aligned using the program MUSCLE (https://www.ebi.ac.uk/Tools/msa/muscle/). Phylogenies were inferred from the aligned datasets using Phylogenetic Analysis Using Parsimony* (PAUP*) v4.0 b10 (Swofford 2003) and MrBayes v3.1.2 (Huelsenbeck & Ronquist 2001), as described in Leaw et al. (2016). In silico rRNA-Targeted Oligonucleotide Probe Design The rDNA sequences of Chattonella species retrieved from the GenBank and SILVA (http://www.arb-silva.de/) public databases were used to identify potential signature regions with the PROBE_DESIGN tool of the ARB programme package (Ludwig et al. 2004). The parameters for probe design included probe length, percentage of GC content, melting temperature (T m ), and self-complementarity (Kumar et al. 2005). The probe candidates, covering both target and probe sequences, were displayed in a result list (Kumar et al. 2005; Tables 2 and 3). The selected probe candidates were then evaluated using the Probe Match Tool (PMT) of ARB. The oligonucleotide sequences were then subjected to extensive specificity tests through BLAST comparisons against nucleotide databases of non-target sequences. The candidate sequences that complemented the region of target sequences with at least one mismatch to other non-target sequences were chosen (Hugenholtz et al. 2002). BLAST was also used to confirm that the sequences were transcribed in the correct orientation (Hugenholtz et al. 2002). The selected probes satisfying the in silico experimental constraints were then synthesised as biotinylated probes (IDT Inc., Singapore). Tyramide Signal Amplification-Fluorescence in situ Hybridisation Cells were fixed with Lugol iodine solution (~1%) and transferred to a glass slide that had been pre-coated with 2% HistoGrip TM (Invitrogen, Life Technologies, USA) (Breininger & Baskin 2000). The fixed cells were air-dried and later rinsed twice with 5× SET hybridisation buffer (10% Nonidet) and allowed to stand in the buffer for 3 min (Chen et al. 2008). Then, the probe was added to the slide containing the cells. The slide was incubated in a dry bath at 58°C for 30 min and washed twice with 5× SET buffer after incubation.
Following that, 1% blocking reagent was added and incubated at room temperature for 30 min; horseradish peroxidase (HRP) solution was then added to the slide and incubated at room temperature for 30 min. The glass slide was then washed with phosphate-buffered saline (PBS) that had been pre-heated to 37°C. The tyramide working solution (TSA kit with Alexa Fluor® 488 Tyramide; Molecular Probes®, Life Technologies, USA) was then added to the slide in the dark and incubated at room temperature for 10 min. The slide was rinsed again in PBS to remove any excess tyramide working solution. The universal UniC probe (positive control) (5´-/5Biosg/ GWA TTA CCG CGG CKG CTG-3´) and the UniR probe (negative control) (5´-/5Biosg/ CAG CMG CCG CGG TAA TWG-3´) were used as controls (Lebaron et al. 1997). The slides were then observed under UV excitation on an Olympus IX51 microscope equipped with a filter set (470-490 nm excitation and 510-550 nm emission). Digital images were captured with an Olympus DP72 digital camera (Olympus). Species Identification Two strains of C. subsalsa from the Johor Strait were established and used in this study. Cells of the two strains showed similar morphology, with cell dimensions of 36.6 ± 2.9 µm in length and 20.5 ± 4.5 µm in width (n = 50). Under LM, cells are oval to pear-like in shape, similar to other C. subsalsa strains reported previously (Fig. 1). There are two sub-equal, hetero-dynamic flagella at the anterior of the cells (Fig. 1A). The flagella can only be observed in living cells. The cells contain many golden-brown chloroplasts, which appear barrel-shaped (Fig. 1B). The nucleus is large and appears oval in shape; it is located in the middle of the cell (Fig. 1D). Phylogenetic Inferences of LSU and ITS rDNA A total of 23 LSU rDNA sequences and 30 ITS sequences of Chattonella were retrieved from the GenBank nucleotide database. Both the LSU and ITS rDNA datasets yielded identical tree topologies for maximum parsimony (MP), maximum likelihood (ML), and Bayesian inference (BI); the BI tree is shown in Fig. 2. The trees revealed two monophyletic clades with strong support values (MP/ML/BI, 100/100/1); one clade comprised species in the C. marina complex: C. marina var. antiqua, C. marina var. marina, C. minima, and C. marina var. ovata, while the other clade comprised only taxa of C. subsalsa. The two C. subsalsa strains (CtSg01 and CtSg02) in this study grouped with other C. subsalsa strains and formed a distinct clade separated from the strains of the C. marina complex, according to both the LSU and ITS phylogenetic trees. The strains of C. subsalsa obtained in this study are shown in boldface. Species-Specific Oligonucleotide Probes of Chattonella subsalsa LSU rRNA signature region and probe In the first run, a total of 21 candidate sequences of potential signature regions in the LSU rDNA of C. subsalsa were detected within a 730-nucleotide-long region (Table 2). At least one mismatch was found relative to related species, such as C. marina var. antiqua and C. marina. The probes selected in silico by ARB contained 18 bases, with GC contents in the range of 50% to 70%. Several of them showed Gibbs energies (ΔG°) greater than -14 kcal/mol, indicative of secondary structure formation (Table 2). A confirmatory test of probe specificity was performed by BLAST searches against the nucleotide database. The blastn results showed that the probes selected were not specific to C. subsalsa, as the probes matched diatom species with 100% coverage and 100% identity.
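As a rough illustration of the probe-design parameters discussed above (length, GC content, melting temperature, and self-complementarity), the Python sketch below screens candidate sequences. The Wallace-rule melting temperature and the crude hairpin check are illustrative stand-ins for the thermodynamic calculations performed by ARB, and the thresholds are assumptions, not the values used in the study.

    # Screen candidate probes by GC content, approximate Tm (Wallace rule:
    # 2(A+T) + 4(G+C)), and a crude self-complementarity check.
    COMPLEMENT = str.maketrans("ACGT", "TGCA")

    def gc_content(seq):
        seq = seq.upper()
        return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

    def wallace_tm(seq):
        seq = seq.upper().replace("U", "T")
        return 2 * (seq.count("A") + seq.count("T")) + 4 * (seq.count("G") + seq.count("C"))

    def self_complementary(seq, window=6):
        seq = seq.upper().replace("U", "T")
        rev_comp = seq.translate(COMPLEMENT)[::-1]
        return any(seq[i:i + window] in rev_comp for i in range(len(seq) - window + 1))

    def screen(candidates, gc_range=(50, 70), tm_max=60):
        return [s for s in candidates
                if gc_range[0] <= gc_content(s) <= gc_range[1]
                and wallace_tm(s) <= tm_max
                and not self_complementary(s)]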
Therefore, a second attempt at the in silico analysis was performed with a slight modification of the signature regions. A total of seven candidate sequences were chosen (Table 3). The length of the probes was in the range of 19 to 23 bases, longer than in the first run, to ensure the presence of GC complementary pairs at the start and end of the probe sequences. Subsequently, the parameters of the probes were determined, and the specificity of the probes was evaluated through blastn searches. Out of the seven probe candidates, Probe Set 7 (5´-GGG GAA UCC GGG UUG GUU UC-3´) was selected (Fig. 3) based on its high GC content (60%), lowest Gibbs energy (∆G° = -20.2 kcal/mol), and lower melting temperature (58.6°C) compared with the other probes (Table 3). The sequence was further synthesised as a biotinylated probe to perform the FISH assay in the later analysis. According to the probe nomenclature, the probe was designated as L-S-C.sub-0039-a-A-20 (Alm et al. 1996). ITS2 rRNA signature region and probe The ITS2 region of the rDNA was used to design a species-specific probe as it is more specific at the species level than the LSU rDNA. In this study, ten candidate sequences of C. subsalsa were determined from a 262-bp-long complete ITS2-rDNA sequence; the sequences that are expected to identify the target are listed in Table 4. The candidate sequence length was in the range of 18 to 21 bases. These candidate sequences were then subjected to specificity analysis by performing BLAST comparisons against the nucleotide databases, and the results showed that there was no match to other non-target species. Among the ten candidate sequences (Table 4), Probe Set 10 (5´-TGG AGA TCT GAA CAG TGA GG-3´) was chosen because it exhibited a lower ∆G° (-16.7 kcal/mol), a GC content of 52.4%, and a hybridisation efficiency of 100%. Most importantly, the probe is unique to C. subsalsa, and a total of six mismatches were found in the sequence when compared to other non-target species (Fig. 3B). This ITS2 probe was designated as I-S-C.sub-0219-a-A-21 and synthesised as a biotinylated probe for the later hybridisation experiments. Tyramide signal amplification-fluorescence in situ hybridisation (FISH-TSA) The FISH-TSA assay with the biotinylated probes was tested on the clonal cultures of C. subsalsa. The species Heterosigma akashiwo was used as the non-target species. When treated with the positive-control eukaryotic-universal UniC probe, the hybridised cells of C. subsalsa and H. akashiwo showed bright green fluorescence signals (Fig. 4). Lime-green fluorescence signals were observed when C. subsalsa cells were hybridised with the C. subsalsa LSU-rRNA and ITS-rDNA species-specific probes (Fig. 4). In contrast, when the cells were treated with the negative-control UniR probe, they showed chartreuse-yellow fluorescence with low intensity (Fig. 4). When the C. subsalsa species-specific probes were tested on H. akashiwo cells, chartreuse-yellow fluorescent signals were observed, indicating negative results (Fig. 5). DISCUSSION In this study, two species-specific oligonucleotide probes in the LSU-rRNA and ITS2-rDNA were developed to detect the harmful raphidophyte Chattonella subsalsa. The probes were applied in a whole-cell fluorescence in situ hybridisation (FISH) assay for species detection. The LSU-rRNA gene region was chosen because it contains universally conserved regions while exhibiting some taxon-specific variable regions (Amann & Ludwig 2000).
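To complement the BLAST-based specificity evaluation described above (e.g., the six mismatches found between the ITS2 probe region and non-target sequences), the following Python sketch counts mismatches between a probe target site and aligned non-target sites and keeps a candidate only if every non-target differs by a minimum number of positions; the non-target sequences and the threshold are illustrative assumptions.

    def mismatches(a, b):
        # Positional mismatch count between two equal-length, aligned sequences.
        return sum(x != y for x, y in zip(a.upper(), b.upper()))

    def is_specific(target_site, non_target_sites, min_mismatches=1):
        return all(mismatches(target_site, s) >= min_mismatches for s in non_target_sites)

    # Illustrative aligned 20-mer sites (the non-target sequences are not real entries):
    target = "TGGAGATCTGAACAGTGAGG"
    non_targets = ["TGGAGATCTGTTCGTACCTA", "AGGAGCTCTGAACAGTCAGG"]
    print(is_specific(target, non_targets, min_mismatches=2))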
However, the results of the specificity analysis on the selected LSU-rRNA sequences showed cross-identity with other Chattonella species and diatom species. Therefore, a more taxon-specific rDNA region, the ITS2-rDNA, was selected to design the species-specific probe of C. subsalsa. The biotinylated probes developed in this study were tested on C. subsalsa cells through the FISH-TSA assay. The technique of FISH has been widely used in identifying HAB species such as Pseudo-nitzschia spp., Alexandrium spp., and Karenia brevis (Davis) Hansen & Moestrup (Miller & Scholin 1998; Chen et al. 2008). The method, however, has been shown to exhibit less sensitivity when observed under an epi-fluorescence microscope (Lecuyer et al. 2008). The efficiency of FISH, therefore, has been improved by tyramide signal amplification (TSA) to obtain a better resolution in the FISH application (Lecuyer et al. 2008). FISH-TSA is a protocol that enables detection with a very small probe by signal amplification (Schriml et al. 1999). The biotinylated probes were designed to make use of the enzymatic action of HRP, as they provide strong enzymatically amplified signals and improved resolution (Kerstens et al. 1995). In this study, both the LSU-rRNA and ITS2-rDNA probes of C. subsalsa exhibited positive green fluorescent signals when hybridised to the cells of C. subsalsa. Generally, the ITS2-rDNA probe does not give whole-cell fluorescence, as it hybridises only to the nucleus of the cells. However, cells of C. subsalsa treated with the ITS2-rDNA probe showed almost whole-cell fluorescence owing to their large nucleus, as shown in Fig. 1. To confirm the specificity of the probes, both C. subsalsa species-specific probes were tested with the non-target species H. akashiwo. The results showed that H. akashiwo exhibited light-yellow fluorescence when tested with the ITS2-rDNA probe, similar to the negative control. This showed that the ITS2-rDNA probe was specific only to C. subsalsa. However, when tested with the LSU-rRNA probe, H. akashiwo showed yellow-green fluorescence, which made it difficult to evaluate whether the result was positive or negative. It is thus suggested that the ITS2-rDNA probe is better than the LSU-rRNA probe for detecting C. subsalsa. The FISH-TSA assay was applied on microscope glass slides throughout the study. This method was previously described by Chen et al. (2008) as applied to H. akashiwo cells. Cell harvesting procedures such as centrifugation and filtration, which were previously applied to the armoured dinophyte Alexandrium and the diatom Pseudo-nitzschia (Miller & Scholin 1998), were less suitable in this case, as the cells tend to burst when undergoing centrifugation or filtration. Several factors affect the efficiency of FISH-TSA. The physiological growth conditions of the cells are among the factors that affect FISH-TSA detection (Chen et al. 2008). Kim et al. (2004) discovered that exponentially growing cells have higher fluorescent intensities than stationary-phase cells. The low fluorescent intensity of the cells was likely due to the decreasing rRNA content in stationary-phase cells (Anderson et al. 1999). CONCLUSION To conclude, the species-specific oligonucleotide probe of C. subsalsa was successfully designed in the ITS2-rDNA region. The results of this study revealed that the ITS2 probe was more specific than the LSU probe. The strong fluorescent signal in FISH-TSA also demonstrates its efficiency in detecting harmful algal species from environmental samples.
Future field applications should be carried out to further evaluate the feasibility of this assay for HAB monitoring purposes.
4,536
2023-03-01T00:00:00.000
[ "Environmental Science", "Biology" ]
An Approximate Algorithm for Robust Adaptive Beamforming This paper presents an adaptive weight computation algorithm for a robust array antenna based on the sample matrix inversion technique. The adaptive array minimizes the mean output power under the constraint that the mean square deviation between the desired and actual responses satisfies a certain magnitude bound. The Lagrange multiplier method is used to solve the constrained minimization problem. An efficient and accurate approximation is then used to derive the fast and recursive computation algorithm. Several simulation results are presented to support the effectiveness of the proposed adaptive computation algorithm. INTRODUCTION The directionally constrained minimization of power (DCMP) adaptive array adjusts the array weights to minimize the mean output power while keeping the antenna response to the direction of arrival (DOA) of the desired signal [1,2]. When the true DOA is known a priori, the DCMP array achieves a good performance. More precisely, the array provides spatial filtering that maximizes the radar's sensitivity in the desired direction while suppressing interference signals coming from other directions and measurement noises. However, if there is a mismatch between the prescribed and actual DOAs, the desired signal is viewed as interference and then suppressed [3]. Even a small mismatch may cause a significant performance degradation. For the solution, a number of robust array antennas that impose the directional derivative constraints [4,5,6,7,8,9], the inequality directional constraints [10,11,12,13], and the mean-square deviation constraints [14,15,16] have been developed. These methods succeed in achieving flat main beam magnitude responses and decreasing the array sensitivity to look-direction errors. However, the adaptive weight computation algorithm to solve the constrained minimization problem at each time step is not provided, which is required to follow a changing interference environment. Although some adaptive algorithms were presented in [6,7,10], they were derived based on the steepest descent technique and therefore exhibit slower convergence than the sample matrix inversion (SMI) technique [17,18]. We here consider the robust array antenna with the inequality directional constraints [10,11,12,13]. The robust array antenna is designed so that the mean output power is minimized under the constraint that the mean square deviation between the desired and actual responses satisfies a certain magnitude bound. The constrained minimization problem can be solved by using the Lagrange multiplier method. However, when the interference environment changes with time, we have to find a root of a nonlinear equation at each time step, which is computationally expensive. We thus apply second-order Taylor series approximations to the nonlinear equation to obtain a closed-form solution, and then derive an adaptive weight computation algorithm based on the SMI technique. The derived adaptive algorithm recursively computes the weight vector in O(N 2 ) computation time at each time step, where N is the number of array elements. Several simulations are performed to show the effectiveness of the proposed adaptive computation algorithm. DCMP ARRAY ANTENNA Consider a narrowband adaptive array antenna of N sensors. We define the kth array input at a discrete time t as x k,t and the kth weight as w k . We further define the array input vector and the weight vector as x t = (x 1,t , x 2,t , . . . , x N,t ) T and w = (w 1 , w 2 , . . . , w N ) T , respectively, where "T" denotes the transpose operator.
The array output is then given by y t = w H x t , where "H" denotes the complex conjugate transpose. Consider a desired sinusoidal signal with a DOA θ d . Putting the phase shift at the kth input as Φ k (θ d ), the constraint of the DCMP array is formulated as c H w = h, (2) where c is the constraint vector defined by c H = (e −jΦ 1 (θ d ) , e −jΦ 2 (θ d ) , . . . , e −jΦ N (θ d ) ) and h is the desired response. Although we here treat a single constraint, the extension to multiple (L) direction constraints is possible by replacing the single constraint vector c with a set of constraint vectors c 1 , . . . , c L , as described later for the adaptive algorithm. When the DOA θ d is given, the DCMP array determines the weight vector w so that the mean output power E[(y t ) 2 ] is minimized subject to the constraint (2), where E[·] denotes the expectation operator. Using the Lagrange multiplier method, the solution to this linearly constrained minimization problem is obtained in closed form [1,2], where R is the covariance matrix of x t , defined by R = E[x t x H t ]. Adaptive weight estimation algorithms to follow a changing interference environment have been derived based on the SD and SMI techniques [1,17]. Constrained minimization problem The use of the equality constraint (2) causes performance degradation in the presence of look-direction errors. For the solution, a robust array antenna, which minimizes the mean output power under the constraint that the mean square deviation between the desired and actual responses satisfies a certain magnitude bound, has been proposed [14,15,16]. This is formulated with an inequality constraint (5), in which ε and ∆ are small positive constants representing the severity of the constraint and the angle width considered in the constraint, respectively. While the equality constraint (2) restricts the output response to h only at the angle θ d , the inequality constraint (5) makes the response close (in a least squares sense) to h in the angle range [θ d − ∆, θ d + ∆]. The resulting array therefore has robustness against look-direction errors. The inequality constraint must be an active equality constraint. If the constraint is not active, the solution to the optimization problem becomes w = 0, which does not make sense. Hence we replace (5) by the equality constraint so that the Lagrange multiplier method is immediately applied. The Lagrangian function is then formed with λ as the Lagrange multiplier. The solution to the constrained minimization problem must satisfy the stationarity relations (7) and (8). Introducing the positive definite Hermitian matrix S, H(w) is minimized by the weight vector given in (11). The constraint (8) is then rewritten as (12), where "*" denotes the complex conjugate. The Lagrange multiplier λ can be determined by substituting (11) into (12) and then solving it for λ. However, the closed-form solution is difficult to obtain due to its nonlinearity. When the generalized singular value decomposition of R is obtained, the value of λ can be determined by finding a root of a nonlinear equation, referred to as the "secular equation" [19,20]. A standard root-finding technique such as Newton's method is applicable to the solution of the nonlinear equation. Both root-finding algorithms and singular value decomposition algorithms use iterative methods, in which an iterative scheme is continued until convergence is obtained, that is, until the new value is very close to the previous value. When R changes with time, as often happens, root-finding and singular value decomposition need to be performed at each time step. The iterative methods require O(N 2 ) computation time per iteration.
The computational complexity increases with an increase in the number of iterations. Moreover, the use of iterative methods at each time step is not suited for adaptive array processing, where the maximum processing time is crucial. We thus derive the adaptive computation algorithm by applying second-order Taylor series approximations to the nonlinear equation. We here consider a single constraint to derive the adaptive algorithm, as shown in (5). When there are multiple (L) direction constraints, we can use a similar technique to derive the adaptive algorithm by replacing c and cc H by c 1 + · · · + c L and c 1 c H 1 + · · · + c L c H L , respectively, in (9), (10), (11), and (12). Computation of weight vector We define the N-dimensional vectors p, q, and r and the (N × N) matrices G, V −1 , and Q 3 . Using a second-order Taylor series expansion, we approximately obtain (17). Substituting (17) into (11) yields (18). Putting the N-dimensional vectors v r , v q , and v p as defined above, the matrix Q 3 in (18) can be rewritten so that it is computed in O(N 2 ) computation time by recursive use of the matrix inversion lemma. Computation of Lagrange multiplier We define several auxiliary real values. Neglecting small quantities of order ∆ 4 in (16), we approximately obtain (24). Substituting (24) into (18) yields (25). We now obtain two different ways of computing w, that is, (18) and (25). The weight vector computed by (18) is more accurate than the one computed by (25), because (18) is derived using only the approximations in (17). We thus use (18) in the computation of w and (25) in the computation of λ. Using (17), (23), and (25), we approximately obtain (26). Substituting (26) into (12) yields (27). After some manipulation, (27) is reduced to (28). Solving (28) for λ, we obtain the expression in (30). We see that the Lagrange multiplier is expressed independently of the weight vector w. We can now obtain the closed-form solution to the constrained minimization problem (4). Summary of the proposed adaptive algorithm To follow a changing interference environment, we recursively estimate R −1 by (31), where R t is the estimate of R at time t and µ is a forgetting factor close to 1. The computational complexity per sample is of order N 2 .

Algorithm 1: Proposed adaptive algorithm.

The direct computation of (31) causes a problem of numerical stability when using a short word-length processor. The use of a numerically stable updating scheme based on the UD or square-root decomposition may be helpful, but we avoided the problem by using floating-point double-precision arithmetic in the following simulations. Algorithm 1 summarizes the proposed algorithm that recursively computes the weight vector w t from the array input x t in O(N 2 ) computation time. It is here noted that p, q, r, and ϕ can be computed a priori. We can consider that the true and approximated solutions are very close to each other because (18) and (30) are derived using second-order Taylor series approximations. This will be verified through computer simulations below. COMPUTER SIMULATION We consider a desired signal with a frequency of 100 MHz, a power of 1, and a DOA θ d = 90 • , and an interference with a frequency of 100 MHz, a power of 10, and a DOA θ i = 150 • . We set h = 1, N = 4, ∆ = 0.5 • , ε = 0.02, and T = 2 nanoseconds. We chose the element spacing equal to one-half wavelength, and added white noise with mean 0 and variance 0.01 (= σ 2 n ) to the array input.
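The recursion in (31) and the SMI-style weight update can be illustrated with the NumPy sketch below. It implements only the conventional DCMP/SMI baseline, i.e., an exponentially weighted covariance with a rank-one inverse update (matrix inversion lemma) followed by the classical linearly constrained minimum-power weights; the paper's approximate robust solution through p, q, r, and Q 3 is not reproduced, and the forgetting factor, initialisation, and steering model are assumptions.

    import numpy as np

    def update_inverse(R_inv, x, mu=0.99):
        """Update of R_t = mu*R_{t-1} + x x^H carried out directly on the inverse (O(N^2))."""
        R_inv = R_inv / mu
        Rx = R_inv @ x
        denom = 1.0 + np.vdot(x, Rx)              # 1 + x^H R^{-1} x
        return R_inv - np.outer(Rx, Rx.conj()) / denom

    def dcmp_weights(R_inv, c, h=1.0):
        # Conventional DCMP solution: w = h * R^{-1} c / (c^H R^{-1} c).
        Rc = R_inv @ c
        return h * Rc / np.vdot(c, Rc)

    # Assumed 4-element half-wavelength array steered to 90 degrees, noise-only snapshots.
    N, theta_d = 4, np.deg2rad(90.0)
    c = np.exp(-1j * np.pi * np.arange(N) * np.cos(theta_d))
    R_inv = 100.0 * np.eye(N, dtype=complex)      # assumed initialisation
    for _ in range(200):
        x = (np.random.randn(N) + 1j * np.random.randn(N)) / np.sqrt(2)
        R_inv = update_inverse(R_inv, x, mu=0.99)
    w = dcmp_weights(R_inv, c, h=1.0)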
When the desired signal s t arrives from a direction θ, the covariance matrix of the array input is represented by (32). Let the optimal weight vector computed off-line be w o . The array pattern with respect to θ is then represented by (33). Figure 1 shows the array pattern of the robust array. We see that the array antenna places a null in the direction of the interference, 150 • , while keeping a large antenna response in the desired direction, 90 • . The array input x t is decomposed into the sum of the desired signal component d t , the interference component i t , and the observation noise component e t . The powers of d t , i t , and e t are expressed accordingly, and the signal-to-interference-plus-noise ratio (SINR) is then defined as the ratio of the desired signal power to the sum of the interference and noise powers. Let the actual and prescribed DOAs of the desired signal be θ r and θ d , respectively. We put θ d = 90 • to design the constraint vector c, and computed the weight vector w for various values of θ r . Figure 2 plots the SINR as a function of θ r . The result for the conventional array computed by (3) is also shown for comparison purposes. It is found that the robust array offers a flat SINR in the look direction, although there is a tradeoff in the noise rejection capability of the processor in look directions which are far away from the desired signal. In the following figures, the exact and approximated solutions are plotted together, and P(a, b) denotes the result for ε = a and ∆ = b. The exact solution was obtained by (11) and (12), and the approximated solution was obtained by (18) and (30). We see that robustness against look-direction errors is increased as ε is smaller, while the resolution capability of the desired and interference signals is decreased. Therefore, we have to make a tradeoff between robustness and resolution capability in determining the value of ε. We also see that the exact and approximated solutions are very close to each other. Similarly, robustness against look-direction errors is increased as ∆ is larger, while resolution capability is decreased. Figure 5 shows the SINRs for σ 2 n = 0.01, 0.1, and 1 with ε = 0.02 and ∆ = 0.5 • , where Q(c) denotes the result for σ 2 n = c. Figure 6 shows the SINRs for N = 4, 6, and 8 with ε = 0.02, ∆ = 0.5 • , and σ 2 n = 0.01, where R(d) denotes the result for N = d. We see that robustness is decreased as σ 2 n is larger or N is larger. We also see that the exact and approximated solutions are very close to each other except for the case of N = 8. We quantitatively evaluated the approximation errors of the Lagrange multiplier and the weight vector computed by the proposed algorithm. Table 1 lists the exact and approximated Lagrange multipliers, the squared error between the true and approximated weights, and the normalized error. The approximation is found to be very accurate. Figure 7 plots the normalized error between the true and approximated weights as a function of the angle width ∆, where Figure 7a is the result for ε = 0.01, 0.02, 0.05, Figure 7b is the result for σ 2 n = 0.01, 0.1, 1, and Figure 7c is the result for N = 4, 6, 8. It is evident that the normalized error increases with an increase of ∆. Finally, we compared the robust array trained by the proposed algorithm to the conventional array trained by the SMI algorithm in terms of convergence performance. Figure 8 depicts the convergence trajectories of the SINR, where Figures 8a and 8b are the results for θ r = 90 • and θ r = 91 • , respectively. We used the same parameters as in Figure 2. We see from Figure 8a that both methods show almost the same performance in the absence of look-direction errors.
We see from Figure 8b that the conventional method fails when there is a mismatch between the prescribed and actual DOAs, while the proposed method exhibits almost the same convergence performance due to its robustness against look-direction errors. CONCLUSION We have derived the adaptive weight computation algorithm for the robust array antenna based on the SMI technique by using second-order Taylor series approximations. The adaptive algorithm can recursively compute the weight vector in only O(N 2 ) computation time. Simulation results have shown that we have to tune parameters ∆ and ε so that a good tradeoff between robustness and resolution capability is achieved, and that robustness depends upon the array size and the SNR. The inequality constraint for the case of broadband sources was considered in [14,16]. Using the same approximation method, the result for a narrowband source will be extended to broadband sources.
3,602.2
2004-01-01T00:00:00.000
[ "Computer Science" ]
Single Nucleotide Polymorphisms and Colorectal Cancer Risk: The First Replication Study in a South American Population Colorectal cancer (CRC) heritability is determined by the complex interaction between inherited variants and environmental factors. CRC incidence rates have been increasing, especially in developing countries such as Brazil, where CRC is the third most frequent cancer in both genders. Genome‐wide association studies (GWAS), based on thousands of cases and controls typed at thousands of single nucleotide polymorphisms (SNPs), have identified several variants that associate with gastrointestinal cancer risk. Less than half of the familial risk has been elucidated through GWAS, which identified common SNPs in almost exclusively European populations. Replication studies in admixed, heterogeneous populations are scarce, and most failed to replicate all the imputed SNPs. Population stratification by ethnic subgroups with different allele frequencies, and therefore different patterns of linkage disequilibrium, may cause spurious associations. Here, we present the first replication study of inherited CRC susceptibility in South America, aiming to identify known SNPs that are associated with CRC risk in European populations. Epidemiology Colorectal cancer (CRC) is one of the most prevalent cancers in both genders worldwide, responsible for about 10% of all neoplasms, mainly in developed industrialized countries such as Australia and New Zealand, North America, and Europe [1]. The detection and early removal of premalignant lesions reduces CRC mortality, as confirmed by several studies that have screened populations at general risk using the fecal occult blood test. In addition, the use of flexible sigmoidoscopy for screening has shown promising results in randomized trials in the United Kingdom [5] and the United States [6], where significant reductions in both incidence and mortality were observed. Improved survival was also observed with this approach in genetically determined high-risk groups [7]. Therefore, individuals determined to be at high risk could be offered more intensive surveillance, with colonoscopy or flexible sigmoidoscopy performed periodically at shorter intervals. Colonoscopy is already offered to individuals at high risk due to a personal or familial history of CRC, as well as to families with Lynch syndrome and intestinal polyposis syndromes, for which more assiduous surveillance is recommended [8]. Therefore, stratifying the general population into risk categories would allow the individualization of screening and prevention strategies. Molecular pathogenesis The classic adenoma-carcinoma sequence has revealed an intricate molecular pathogenesis of CRC, where tumor suppressor genes are inactivated and proto-oncogenes are activated through several signaling pathways, such as APC-β-catenin, RAS-RAF, PIK3CA-PTEN, and TGF-β [9]. Three main molecular mechanisms are involved in CRC pathogenesis: chromosomal instability, microsatellite instability, and the serrated polyp pathway. The first one occurs in most sporadic cancers, where the accumulation of mutations, rearrangements, and aneuploidy drives malignant transformation over decades [10]. The second one occurs in about 15% of sporadic CRC and in most hereditary CRC. In sporadic CRC, an epigenetic event (hypermethylation) occurs in CpG islands of MMR gene promoters, which silences them, leading to genetic instability in microsatellite regions of the genome [11].
In Lynch syndrome, mutations in MMR genes lead to microsatellite instability and accelerate adenoma-carcinoma sequence more rapidly, the reason why Lynch syndrome families develop cancer in their 40's or even earlier. The most recently serrated polyp pathway involves molecular mechanisms other than the classic adenoma-carcinoma sequence but has not yet been fully elucidated [12]. Risk factors Colorectal carcinoma is a multifactorial disease, where complex interactions between genetic and environmental factors determine individual risk. Among the latter are diets rich in red meat and animal fat and lower in fiber, smoking, alcohol consumption, obesity, sedentary lifestyle, and chronic inflammatory bowel disease [13]. In addition to age, gender, and previous history of polyps, familial history is considered the main risk factor, being the relative risk between siblings two to three times higher than in the general population [14]. Traditionally, CRC has been classified into sporadic and hereditary. The concept of familial CRC reflects one end of a risk spectrum determined by the contribution of genetic variants of susceptibility. Most are sporadic with no family history and known genetic susceptibility. Most of the CRC susceptibility genes were identified in families affected by inherited syndromes, which are caused by mutations with high penetrance. These syndromes account for about 6% of CRC cases and can be classified as syndromes with or without gastrointestinal polyposis [15]. Among the main syndromes with polyposis are familial adenomatous polyposis (FAP) caused by mutations in the APC gene; Peutz-Jeghers syndrome, attributable to mutations in the STK11 gene; Juvenile polyposis, associated with the BMPR1A gene, and Cowden's syndrome, related to the PTEN gene. Among non-polyposis syndromes, the most prevalent is Lynch's syndrome, accounting for about 3% of all the cases with CRC, caused by mutations in the mismatch repair genes during DNA replication (MLH1, MSH2, MSH6, PMS2, and EPCAM) [16]. Most of the mutations identified in familial CRC are highly penetrant, that is, with a high chance of manifesting cancer throughout the life. However, there are families with CRC clusters that do not have mutations in genes associated with hereditary syndromes. This raises the hypothesis that there are other variants or mutations with low penetrance that make certain individuals more susceptible to the CRC development. Studies with brothers with and without CRC, as well as several association studies, have identified regions in the human genome in which single nucleotide polymorphisms (SNPs) variants are associated with CRC susceptibility [17]. Up to 25% of cases are familial CRC aggregations whose heritability has been partially uncovered by GWAS SNPs [18]. However, the large proportion of familial risk remains unexplained-so-called missing heritability. Single nucleotide polymorphisms Single nucleotide polymorphisms (SNPs) are variations of the human genome, where two or occasionally three alternative nucleotides are common in the population. In most cases, an SNP has two alternative forms, termed alleles, for example, A or G at a certain position in the genome. There are 10 million SNPs estimated in the human genome, representing, along with other types of polymorphisms (such as copy number variations), about 90% of human genetic variation, including susceptibility to disease. Two individuals are 99.5% identical in their DNA sequences, and, every 1000 base pairs, there is one SNP [19]. 
Variants that have been deleterious during evolution are particularly rare due to natural selection. In turn, pathogenic variants that are deleterious in homozygosis may have become neutral or undergone a selection balance by conferring an advantage on asymptomatic heterozygotes. Therefore, alleles of frequent SNPs are not expected to have any significant phenotypic effect, because natural selection would either eliminate them (if detrimental, through negative selection) or fix them (if beneficial, through positive selection). Moreover, most SNPs are located not in coding or regulatory sequences but in intergenic sequences [20]. Genome-wide association studies Searching for population associations is an attractive option to identify disease susceptibility genes. Association studies are easier to conduct than linkage studies because they do not require multiple family cases segregating the phenotype. However, they depend on linkage disequilibrium (LD), the nonrandom association between alleles at different loci, with a susceptibility factor, which can only be identified by markers located in the same haplotype block (the set of alleles at linked loci on a single chromosome) close to the factor [21]. SNPs are the markers of choice for studying LD for three reasons: (1) they are sufficiently abundant to allow very short chromosome segments to be examined; (2) compared to microsatellites, they have a lower mutation rate; and (3) SNPs are easily genotyped at large scale across the genome [20]. The structure of LD in the human genome was investigated by the HapMap project, and the first result was a list of more than 3 million SNPs that captured most of the common genomic variation in some populations [22]. Genome-wide association studies (GWAS) are based on the LD principle at the population level, which is usually the result of a particular ancestral haplotype common in a population. Usually, loci that are physically close exhibit stronger LD than those that are distant on a chromosome. The genomic distance at which LD decays determines how many genetic markers are required to "tag" a haplotype block; the number of such markers is much smaller than the total number of segregating variants in the population. For example, the selection of about 500,000 common SNPs in the human genome is sufficient to "tag" the common variants in non-African populations, even though the total number of SNPs is greater than 10 million [22]. These SNPs are called tagSNPs. Although GWAS are not influenced by prior biological knowledge or the genomic location of SNPs, they are influenced by the LD between genotyped SNPs and non-genotyped causative variants. The strength of the statistical association between the alleles at two loci in the genome depends mainly on their allelic frequencies. Thus, a rare variant (minor allele frequency (MAF) less than 0.01) will be in low LD with a neighboring common variant, even if they lie in the same recombination interval. However, most of the SNPs selected in SNP arrays are common (MAF higher than 0.05), and therefore GWAS have the power to detect associations of variants that are relatively common in the population [21]. On the other hand, it has been suggested that the observed association between a common SNP and a complex trait may result from LD of the SNP with rare variants at the same locus.
Since common alleles and causal rare variants are correlated in a low LD, the hypothesis of a "synthetic association" implies that the magnitude of the effect of the causative variants is much greater than that of the common genotyped SNPs by the GWAS. For example, if an SNP explains 0.1% of the phenotypic variance in the population, the causal variant would account for 5-10% [23]. GWAS and CRC Most of the studies to identify low penetrance alleles for CRC susceptibility were based on a candidate gene approach, whose role in CRC pathogenesis was supposedly known. However, without the real understanding of the biology of predisposition, the choice of genes was problematic. Thus, until the advent of GWAS, few or no association studies based on this approach were able to identify alleles of susceptibility unequivocally associated with the CRC risk [17]. The number of common variants contributing with more than 1% of the inherited risk is very low, and it is very unlikely that there will be other SNPs with similar effects (greater than 1.2) for alleles with frequencies greater than 20% in European populations. In fact, the GWAS identified on average 80% of the common SNPs in this population but only 12% of SNPs with a minor allele frequency (MAF) between 5 and 10% [17]. However, variants with this profile, if taken collectively, can confer substantial risks due to their multiplicity, and in the case of CRC, to date, explain about 10% of heritability [33]. In a model built on data from the Scottish GWAS, about 170 common independent variants would explain all the genetic variance of the CRC [35]. Therefore, most of the genetic susceptibility to CRC still needs to be defined the so-called "missing heritability". There are other possible causes of this unidentified portion: (1) the effect of rare variants; (2) failure to identify causal variants; and (3) allelic heterogeneity [36]. GWAS strategies to identify modest common risk alleles are not ideal for identifying rare variants (MAF below 1%) with potentially greater effects, as well as for capturing copy number variants and other structural variants, such as insertions, complex rearrangements, or expansions of microsatellite repeats, which may alter the risk of CRC. As efforts are made to scale up the GWAS meta-analyses, both in terms of sample size and coverage of SNPs, as well as to increase the number of SNPs considered for large-scale replication, it will be feasible to discover new variants. It is possible that a multiple loci approach based on haplotype markers identifies rare alleles. In addition, the use of exome sequencing may provide a more effective strategy for finding such variants [37]. Objectives The overall objective of the present study was to replicate in individuals of the Brazilian population the 10 SNPs associated with CRC risk that are previously described in European populations. The specific objectives were to (1) calculate the allelic and genotype frequencies of the 10 SNPs in cases and controls; (2) analyze the association between the genotypes and alleles of the 10 SNPs and CRC risk; (3) calculate the magnitude of the effect on CRC risk; and (4) correlate the genotypes of the 10 SNPs with clinical-pathological characteristics and with familial history. 
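Returning to the linkage disequilibrium measures discussed above, the short Python sketch below computes the standard D and r 2 statistics between two biallelic loci from haplotype and allele frequencies; the numerical values are purely illustrative.

    # D = p_AB - p_A * p_B ; r^2 = D^2 / (p_A (1 - p_A) p_B (1 - p_B))
    def ld_stats(p_AB, p_A, p_B):
        D = p_AB - p_A * p_B
        r2 = D ** 2 / (p_A * (1 - p_A) * p_B * (1 - p_B))
        return D, r2

    D, r2 = ld_stats(p_AB=0.30, p_A=0.40, p_B=0.50)
    print(round(D, 3), round(r2, 3))   # 0.1 0.167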
Patient selection criteria This is a retrospective case-control genetic association study, whose sample comprised 727 cases and 740 controls, recruited from the Departments of Pelvic Surgery, Clinical Oncology, and Community Medicine at the AC Camargo Cancer Center in São Paulo, Brazil. All patients and controls authorized the present study by signing the informed consent form previously approved by the Research Ethics Committee of the institution under number 1231/09. The inclusion criteria for cases were a CRC diagnosis before age 75 years or an advanced colorectal adenoma (villous histology and/or size greater than 1 cm and/or severe dysplasia) diagnosed before age 60 years; controls were individuals without CRC who did not have first- or second-degree relatives with CRC. Controls were not matched with the cases with respect to socioeconomic condition, ancestry, or self-referred ethnicity. The exclusion criteria were the presence of hereditary syndromes of predisposition to CRC, immunohistochemistry tests showing absence of proteins of the DNA mismatch repair genes, the presence of high-penetrance germline mutations in CRC susceptibility genes, appendix tumors, and/or previous chronic inflammatory bowel disease. Statistical analysis All tests were corrected for multiple analyses to avoid type I error. The allelic and genotypic frequencies of each SNP were calculated using the DeFinetti program [38], and deviations of the genotype frequencies in cases and controls from those predicted by the Hardy-Weinberg equilibrium were assessed by the chi-square test with one degree of freedom or by Fisher's exact test if the expected cell count was less than five. Association analyses between the genotypes found in cases and controls for each SNP were performed with several types of genetic models, using the SNP and Variation Suite Version 7.6.10 program [39]. Multiple analyses were corrected by the false discovery rate and Bonferroni methods. Clinical characteristics of cases and controls Of the 727 cases included in this study, 51% were male, with a median age at diagnosis of 56.9 ± 10.1 SD years; 30% fulfilled the Bethesda criteria; 3% of the tumors were high-risk adenomas; the most common site of CRC was the rectum; and in about 10% there was an extra-colonic second primary tumor. Tubular adenocarcinomas, moderately differentiated, at clinical stage III were the most commonly diagnosed. The majority of patients were alive and disease-free at the time of data collection, and about 30% of cases had no familial history of CRC, although almost 20% did not know whether they had affected relatives. Of the 740 controls included in this study, 52% were female, with a median age of 51.9 ± 12.3 SD years. Cases and controls were age- and sex-matched (p = 0.126 and 0.193, respectively). SNP genotyping and association tests The genotypic frequencies of each SNP in cases and controls and their p-values are depicted in the following graphics. The allelic frequencies, the number of alleles, the genotyping rate, and the Hardy-Weinberg equilibrium (HWE) test are presented in Table 1. The genetic association tests and their genetic models are shown in Table 2. Of the 10 SNPs, 5 (06, 09, 16, 82, and 83) were statistically significantly (p ≤ 0.05) associated with the risk of CRC, and 2 (26 and 71) showed a trend toward association (p < 0.1). SNP 06 showed the most significant association among all SNPs in all genetic models.
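As a concrete illustration of the genotype-frequency and Hardy-Weinberg checks and of the allelic association test described in the statistical analysis above, the Python sketch below computes allele frequencies, the one-degree-of-freedom Hardy-Weinberg chi-square, and an allelic odds ratio from genotype counts. The counts are illustrative; the study itself used the DeFinetti program and the SNP and Variation Suite.

    # Genotype counts are given as (AA, Aa, aa); numbers below are illustrative only.
    def allele_freq_A(counts):
        aa, ab, bb = counts
        return (2 * aa + ab) / (2 * sum(counts))

    def hwe_chi2(counts):
        # Chi-square (1 d.f.) of observed genotype counts against Hardy-Weinberg expectations.
        n, p = sum(counts), allele_freq_A(counts)
        expected = (n * p * p, 2 * n * p * (1 - p), n * (1 - p) * (1 - p))
        return sum((o - e) ** 2 / e for o, e in zip(counts, expected))

    def allelic_odds_ratio(case_counts, control_counts):
        # Risk allele = a (the variant allele); odds ratio from the 2x2 allele table.
        risk_case = 2 * case_counts[2] + case_counts[1]
        ref_case = 2 * case_counts[0] + case_counts[1]
        risk_ctrl = 2 * control_counts[2] + control_counts[1]
        ref_ctrl = 2 * control_counts[0] + control_counts[1]
        return (risk_case * ref_ctrl) / (ref_case * risk_ctrl)

    cases, controls = (300, 320, 107), (360, 310, 70)
    print(hwe_chi2(controls), allelic_odds_ratio(cases, controls))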
SNP 09 was the only predictor of lower risk, mainly in the dominant model (25% lower), whereas SNPs 16 and 82 were associated with higher risk in the recessive model (45% and 85% higher, respectively). SNP 83 conferred higher risk principally in the dominant model (almost 50%). SNPs 26 and 71, on the other hand, obtained a marginally significant association only in the allelic and additive models, with a trend toward higher risk for SNP 71 and lower risk for SNP 26. To sum up, of the five SNPs associated with CRC risk, two (SNPs 16 and 82) conferred higher risk among rare homozygotes than among heterozygotes and common homozygotes together (recessive model), and three (06, 09, and 83) showed significant effects (higher risk for 06 and 83, lower risk for 09) among heterozygotes and rare homozygotes together compared with common homozygotes (dominant model). Table 3 shows the five SNPs associated with CRC risk with their respective wild-type and variant alleles, their risk allele frequencies in comparison with the European population, the effect size of the variant allele, and their populational attributable risks, that is, the decrease in disease incidence expected if the population were not exposed to the risk allele. Discussion In common diseases such as CRC, it is estimated that most of the genetic risk is due to multiple inherited loci following a polygenic model, each with a common allele frequency (MAF greater than 5%) and a small effect size, with odds ratios between 1.0 and 1.5 [17]. Thus, to detect those small effects, a large sample size is necessary. This strategy was validated by meta-analyses of GWAS data from European populations with tens of thousands of individuals genotyped on high-throughput platforms, followed by validation in multiple phases with independent series of cases and controls. Even so, only about 20 common SNPs with modest effects have been identified so far, each with a p-value corrected for multiple testing (<5.0 × 10 −8 ). In Table 4, GWAS data from European populations are compared with this study. ** PAR = (RAF(OR - 1))/(1 + RAF(OR - 1)). RAF, risk allele frequency; PAR, populational attributable risk. In this study, there was an association with CRC risk for half of the SNPs (06, 09, 16, 82, and 83), whose risk alleles showed frequencies similar to those in European GWAS, except for SNP 06. Effect sizes were modest, as in European GWAS. SNP 06 was the variant with the greatest effect and the most statistically significant association (p trend = 3.49 × 10 −5 ), conferring the highest populational risk, whereas in European GWAS the risk increased by up to 23%, representing 8.6% of the populational risk [35]. In the original study, the same SNP also showed the greatest association (p trend = 1.0 × 10 −12 ) [27]. SNP 09 (rs10411210) was associated with a lower risk in a dose-dependent way [31]. In this study, however, this effect was detected only in the dominant model. It is noteworthy that in European studies the major allele (C) confers a 15% higher risk, responsible for 12% of the populational risk [35]. In the present study, there was a trend toward a higher risk, but it was not statistically significant (p = 0.08). Moreover, SNP 16 was associated with a higher risk in this study than in European GWAS, as was its populational attributable risk [35]. Likewise, SNPs 82 and 83 also increased the CRC risk and populational risk more in the present study than in European GWAS [35]. In the present study, populational stratification by ancestry was not investigated.
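As a quick check of the populational attributable risk formula quoted in the table footnote above, PAR = RAF(OR - 1)/(1 + RAF(OR - 1)), the snippet below uses illustrative values rather than the study's actual estimates.

    def par(raf, odds_ratio):
        # Populational attributable risk from risk allele frequency and odds ratio.
        excess = raf * (odds_ratio - 1)
        return excess / (1 + excess)

    print(round(par(raf=0.30, odds_ratio=1.23), 3))   # ~0.065, i.e., about 6.5%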
However, the Brazilian population, although greatly admixed, has a high prevalence of individuals of European ancestry, the great majority of whom are located in the South (79.5%) and Southeast (74.2%) [40]. Conclusion This study partially replicated European GWAS findings in a Southeastern Brazilian population with a predominantly European genetic background. The small sample size and the lack of stratification by ancestry make the study prone to type II and type I errors, respectively. Further studies in admixed populations would certainly help to uncover the missing heritability of CRC and to build the genetic architecture of CRC susceptibility.
4,589.2
2017-09-06T00:00:00.000
[ "Medicine", "Biology" ]
LORIA System for the WMT13 Quality Estimation Shared Task In this paper we present the system we submitted to the WMT13 shared task on Quality Estimation. We participated to the Task 1.1. Each translated sentence is given a score between 0 and 1. The score is obtained by using several numerical or boolean features calculated according to the source and target sentences. We perform a linear regression of the feature space against scores in the range [0..1], to this end, we use a Support Vector Machine with 66 features. In this paper, we propose to increase the size of the training corpus. For that, we decide to use the post-edited and reference corpora in the training step after assigning a score to each sentence of these corpora. Then, we tune these scores on a development corpus. This leads to an improvement of 10.5% on the development corpus, in terms of Mean Average Error, but achieves only a sligth improvement on the test corpus. Introduction In the scope of Machine Translation (MT), Quality Estimation (QE) is the task consisting to evaluate the translation quality of a sentence or a document. This process may be useful for post-editors to decide or not to revise a sentence produced by a MT system (Specia, 2011;Specia et al., 2010). Moreover, it can be useful to decide if a translated document can be broadcasted or not (Soricut and Echihabi, 2010). The most obvious way to give a score to a translated sentence consists in using a machine learning approach. This approach is supervised: experts are asked to score translated sentences and with the obtained material, one learns a prediction model of scores. The main drawback of the machine learning approach is that it is supervised and requires huge data. To score a sentence is time-consuming. Moreau et al. in (Moreau and Vogel, 2012) dealt with this issue by proposing unsupervised similarity measures. In fact, the score of a translated sentence is defined by a measure giving the distance between it and the contents of an external corpus. The authors improve the results of the supervised approach but this method can be used only in the ranking task. Raybaud et al. (Raybaud et al., 2011) proposed a method to add errors in reference sentences (deletion, substitution, insertion). By this way, they build additional corpus in which each word can be associated with a label correct/not correct. But, it is not possible to predict the translation quality of sentences including these erroneous words. In this paper, we propose to increase the size of the training corpus. For that, we use the score given by experts to evaluate additional sentences from the post-edited and reference corpora. Practically, we extract from source and target sentences numerical vectors (features) and we learn a prediction model of the scores. Then, we apply this model to predict the scores of the post-edited and the reference sentences. And finally, we tune the predicted scores on a development corpus. The article is structured as follows. In Section 2, we give an overview of our machine learning approach and of the features we use. Then, in Sections 3 and 4 we describe the corpora and how we increase the size of the training corpus by a partlyunsupervised approach. In section 5, we give results about this method and we end by a conclusion and perspectives. Overview of our quality estimation submission We submit a system for the task 1.1: one has to evaluate each translated sentence with a score between 0 and 1. 
This score is read as the HTER between the translated sentence and its post-edited version. Each translated sentence is assigned a score between 0 and 1. The score is calculated using several numerical or boolean features extracted according to the source and target sentences. We perform a regression of the feature space against [0..1]. To this end, we use the Support Vector Machine algorithm (LibSVM toolkit (Chang and Lin, 2011)). We experimented only the linear kernel because our experience from last year (Langlois et al., 2012) showed that its performance are yet good while no parameters have to be tuned on a development corpus. The baseline features The QE shared task organizers provided a baseline system including the same features as last year: source and target sentences lengths; average source word length; source and target likelihood computed with 3-gram (source) and 5-gram (target) language models; average number of occurrences of the words within the target sentence; average number of translations per source word in the sentence, using IBM1 translation table (only translations higher than 0.2); weighted average number of translations per source word in the sentence (similar to the previous one, but a frequent word is given a low weight in the averaging); distribution by frequencies of the source n-gram into the quartiles; match between punctuation in source and target. Overall, the baseline system proposes 17 features. We remark that only 5 features take into account the target sentence. The LORIA features In previous works (Raybaud et al., 2011;Langlois et al., 2012), we tested several confidence measures. As last year (Langlois et al., 2012), we use the same features. We extract information by the way of language model (perplexity, level of back-off, intra-lingual triggers) and translation table (IBM1 table, inter-lingual triggers). The features are defined at word level, and the features at sentence level are computed by averaging over each word in the sentence. In our system, we use, in addition to baseline features, ratio of source and target lengths; source and target likelihood computed with 5-gram language models (Duchateau et al., 2002) (in addition to 3-gram features from baseline); level of backoff n-gram based features (Uhrik and Ward, 1997). This feature indicates if the 3-gram, the 2-gram or the unigram corresponding to the word is in the language model. For likelihoods and levels of backoff, we use models trained on corpus read from left to right (classical way), and from right to left (sentences are reversed before training language models). This leads to two language models, and therefore to two values for each feature and side (source and target). Moreover, a common property of all n-gram and backoff based features is that a word can get a low score if it is actually correct but its neighbours are wrong. To compensate for this phenomenon we took into account the average score of the neighbours of the word being considered. More precisely, for every relevant feature x . 
defined at word level we also computed: The other features are intra-lingual features: each word is assigned its average mutual information with the other words in the sentence; interlingual features: each word in target sentence is assigned its average mutual information with the words in source sentence; IBM1 features: contrary to IBM1 based baseline features which take into account the number of translations, we use the probability values in the translation table between source and target words; basic parser (correction of bracketing, presence of end-of-sentence symbol); number and ratio of out-of-vocabulary words in source and target sentences. This leads to 49 features. A few ones are equivalent to or are strongly correlated to baseline ones. We remark that 27 features take into account the target sentence. The union of the both sets baseline+loria improved slightly the baseline system on the test set provided by the QE Shared Task 2012 (Callison-Burch et al., 2012). Corpora The organizers provide a set of files for training and development. We list below the ones we used: • source.eng: 2,254 source sentences taken from three WMT data sets (English): news-test2009, news-test2010, and news-test2012. In the following, this file is named src • target system.spa: translations for the source sentences (Spanish) generated by a PB-SMT system built using Moses. In the following, this file is named syst • target system.HTER official-score: HTER scores between MT and post-edited version, to be used as the official score in the shared task. In the following, this file is named hteroff • target reference.spa: reference translation (Spanish) for source sentences as originally given by WMT; In the following, this file is named ref • target postedited.spa: human post-edited version (Spanish) of the machine translations in target system.spa. In the following, this file is named post We split these files into two parts: a training part made up of the 1,832 first sentences, and a development part made up of the 442 remaining sentences. This choice is motivated by the fact that in the previous evaluation campaign we had exactly the same experimental conditions. For each given file f, we use therefore a part named f.train for training and a part named f.dev for development. Training Algorithm This section describes the approach we propose to increase the size of the training corpus. We have to train the prediction model of scores from the source and target sentences. The common way to train such a prediction model consists in extracting a features vector for each couple (source,target) from the (src.train,syst.train) corpus. For each vector, the score associated by experts to the corresponding sentence is assigned. Then, we use a machine learning approach to learn the regression between the vectors and the scores. And finally, we use the triplet (src.dev,syst.dev,hteroff.dev) to tune parameters. With machine learning approach, the number of examples is crucial for a relevant training, but unfortunately the evaluation campaign provides a training corpus of only 1,832 examples. To increase the training corpus, we propose to use the ref and post files. But for that, we have to associate a score to these new target sentences. One way could be to calculate the HTER score between each sentence and its corresponding sentence in the post edited file. But this leads to a drawback: all the couples (src,post) would have a score equal to 0, and then there is a risk of overtraining on the 0 value. 
To prevent this problem, we preferred to learn a prediction model from the (src.train,syst.train,hteroff.train) triplet. Then we apply this prediction model to the (src.train,post.train) and (src.train,ref.train) couples. In this way, we get a training corpus made up of 1,832 × 3 = 5,496 examples with their scores. Consequently, it is possible to learn a prediction model from this new training corpus. These scores are not optimal because the features cannot describe all the information from the sentences, and a machine learning approach is limited if the data are not sufficiently large. Therefore, we propose an anytime randomized algorithm to tune the reference and post-edited scores on the development corpus. We give below the algorithm we propose. To evaluate a model, we use it to predict the scores on the development corpus. Then we compare the predicted scores to the expert scores and we compute the Mean Average Error (MAE), given by the formula MAE(s, r) = (100/n) × Σ_{i=1}^{n} |s_i − r_i|, where s and r are two sets of n scores. Results We used the data provided by the shared task on QE, without any additional corpus. This data is composed of a parallel English-Spanish training corpus. This corpus is made of the concatenation of the europarl-v5 and news-commentary10 corpora (from WMT-2010), followed by tokenization, cleaning (sentences with more than 80 tokens removed) and truecasing. It has been used for the baseline models provided in the baseline package by the shared task organizers. We used the same training corpus to train additional language models (5-gram with Kneser-Ney discounting, obtained with the SRILM toolkit) and the triggers required for our features. For feature extraction, we used the files provided by the organizers: 2,254 source English sentences, their translations by the baseline system, and the scores of these translations. This score is the HTER between the proposed translation and the post-edited sentence. We used the train part to perform the regression between the features and the scores. Therefore, the system we propose in this campaign is the same as the one we presented for the previous campaign in terms of features. But we only use an SVM with a linear kernel and we do not use any feature selection. The added value of the new system is the fact that we increase the size of the training corpus. To evaluate the different configurations, we used the MAE measure. The performance of our system with only the classical train set (src.train,syst.train) is given in Table 1. First, we use the system trained on (src.train,syst.train) to predict scores for the sentences in post.train and ref.train. We know that these scores should represent the HTER score, so a well-translated sentence should be assigned a lower score. Therefore, we can make the hypothesis that sentences from post.train and ref.train are better than those in syst.train. We check this hypothesis by comparing the distributions of HTER scores in the three files (true HTER scores in syst.train, and predicted scores in the two other files). We present in Table 2 the Minimum, Maximum, Mean and Standard Deviation of this score for the three corpora. We remark that the scores are not well predicted because some of them are negative, while all scores in syst.train are between 0 and 1. This is due to the fact that the constraint of HTER in terms of limit values is not explicitly taken into account by the SVM. We give more details about these scores out of [0..1] in Table 3.
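A minimal sketch of the MAE measure defined above and of the anytime randomized tuning loop it drives; the disturbance probability of 0.1 and the shift range [−0.01, +0.01] are the values reported later in the Results section. The retrain_and_eval callback is hypothetical: it stands for retraining the SVM on the candidate scores and returning the development-set MAE.

```python
import random

def mae(predicted, reference):
    """Mean Average Error between two score lists, scaled by 100 as in the paper."""
    n = len(predicted)
    return 100.0 * sum(abs(s - r) for s, r in zip(predicted, reference)) / n

def tune_scores(train_scores, retrain_and_eval, p_disturb=0.1, shift=0.01, max_iters=25000):
    """Anytime randomized tuning: perturb a random subset of training scores and keep
    the new configuration only if the development-set MAE improves."""
    best_scores = list(train_scores)
    best_mae = retrain_and_eval(best_scores)       # retrains the model, returns dev MAE
    for _ in range(max_iters):
        candidate = [
            s + random.uniform(-shift, shift) if random.random() < p_disturb else s
            for s in best_scores
        ]
        candidate_mae = retrain_and_eval(candidate)
        if candidate_mae < best_mae:               # keep only improving configurations
            best_scores, best_mae = candidate, candidate_mae
    return best_scores, best_mae
```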
For post.train, 2 scores are under 0 with a mean value equal to -0.123, and no scores are higher than 1. For ref.train, 4 scores are under 0 with a mean value equal to -3.023, and 26 scores are higher than 1 with a mean equal to 1.126. Compared to the 1,832 sentences in the training corpus, we can conclude that the 'outliers' are very rare. In Table 2, the Mean and Standard Deviation are computed only for scores predicted between 0 and 1. The obtained mean values are quite similar, but the standard deviation is very low for the predicted scores. This configuration leads to a performance equal to 13.88 on the development corpus, which is slightly worse than the BASELINE system but slightly better than the BASELINE+LORIA system. Because the SVM predicts scores which do not exactly represent HTER, and because the model is learnt on a relatively small corpus (1,832 sentences), we decided to randomly modify some scores. In the following, this operation is called the tuning process. For the tuning process, after several tests, we fixed the probability pdisturb of modifying the score of a sentence to 0.1. Then, the score is modified by randomly shifting it within [−0.01, +0.01]. We start with the initial predicted scores (MAE = 13.88). Then we randomly modify a subset of scores and keep a new configuration if its MAE is improved. The process is stopped when the MAE converges. Figure 1 presents the evolution of the MAE on the development corpus. The process stopped after 22,248 iterations. Only 274 (1.2%) iterations led to an improvement. We present in Table 4 the results of this approach on the development corpus and on the official test set, for the BASELINE features and the BASELINE+LORIA features, with and without using the post-edited and reference sentences. Finally, we achieve an MAE of 12.05 on the development set. This constitutes an improvement of 10.5% in comparison to the BASELINE system. But we improve the performance of the baseline system only slightly on the test set. We conclude that there is overtraining on the development corpus. To prevent this problem, we could use a leave-one-out approach on the training and development corpora. With the tuned values of the scores, we calculated the same statistics as in Tables 2 and 3. We present these statistics in Tables 5 and 6. As we can see, the tuning process leads to an increase in the mean value of the scores. Moreover, the number of scores out of range increases. This analysis reinforces our conclusion about overtraining: predicted scores may be strongly modified to obtain a good performance on the development corpus. .83 on the test corpus, which is worse than the performance without correction. This is for us a drawback of the machine learning approach. For this approach, the scores have no semantics. The SVM does not "know" that the scores are HTER values between 0 and 1. Thus, if tuning leads to unreasonable values, this is not considered a problem as long as it increases the performance. Moreover, maybe the features do not extract from all sentences information representative of their quality, and this quality is overestimated: the tuning system then has to strongly lower the corresponding scores to counteract this problem. Conclusion and perspectives In this paper we propose a method to increase the size of the training corpus for QE in the scope of Task 1.1.
We add to the initial training corpus (sentences translated by a machine translation system) the post-edited and the reference sentences. We associate with these sentences scores predicted using a model learnt on the system sentences. Then we tune the predicted scores on the development corpus. This method leads to an improvement of 10.5% on the development corpus in terms of MAE, but achieves only a slight improvement on the test corpus. A statistical study shows that tuning the scores leads to out-of-range values. This surprising behavior has to be investigated. In addition, we will test other machine learning tools (neural networks, for example). Another point is that, contrary to last year, the whole set of features leads to worse performance than the baseline features. This could be explained by the fact that no selection algorithm has been used to choose the best features. In fact, we preferred this year to investigate the underlying knowledge in the post-edited and reference corpora. Last, we conclude that the good improvement on the development corpus is not reproduced on the test corpus. To prevent this problem, we will use a leave-one-out approach on the training and development corpora.
4,201.4
2013-08-08T00:00:00.000
[ "Computer Science" ]
Exploring the vulnerability in the inference phase of advanced persistent threats In recent years, the Internet of Things has been widely used in modern life. Advanced persistent threats are long-term network attacks on specific targets with attackers using advanced attack methods. The Internet of Things targets have also been threatened by advanced persistent threats with the widespread application of Internet of Things. The Internet of Things device such as sensors is weaker than host in security. In the field of advanced persistent threat detection, most works used machine learning methods whether host-based detection or network-based detection. However, models using machine learning methods lack robustness because it can be attacked easily by adversarial examples. In this article, we summarize the characteristics of advanced persistent threats traffic and propose the algorithm to make adversarial examples for the advanced persistent threat detection model. We first train advanced persistent threat detection models using different machine learning methods, among which the highest F1-score is 0.9791. Then, we use the algorithm proposed to grey-box attack one of models and the detection success rate of the model drop from 98.52% to 1.47%. We prove that advanced persistent threats adversarial examples are transitive and we successfully black-box attack other models according to this. The detection success rate of the attacked model with the best attacked effect dropped from 98.66% to 0.13%. Introduction Advanced persistent threats (APTs) are long-term network attacks on specific targets with attackers using advanced attack methods. APTs are more advanced than other forms of attacks. APT is advanced because it uses advanced attack tools and methods. Before an APT attack, the attacker will collect as accurately as possible the business process and target system of the attacked object. In addition, APT attacks are far more difficult to detect than traditional non-targeted attacks because APT attacks are targeted, highly concealed, exploit vulnerabilities and do not aim at directly obtaining economic benefits. 1 At first, APT attackers mainly attacked the state and government departments. In recent years, they have begun to attack the private and corporate sectors. 2 APT attackers also have turned their attack targets from traditional targets to Internet of Things (IoT) targets. IoT threats continued at a rapid pace and APT attackers successfully used timeworn strategies to gain access to vulnerable connected devices. 3 As exposed by Drovorub, APT28 hacked IoT devices such as the video decoder and the printer. 4 Therefore, how to detect and respond to APT attacks has become increasingly important for the cyber security and the IoT security. The existing APT detection methods are divided into host-based detection methods and network-based detection methods. The host-based detection methods are mainly to detect whether there are malicious behaviours on independent hosts such as the execution of malicious software, the behaviour of applications trying to modify certain files. Bian et al. 5 extracted graphbased features from authentication logs of the target host during the APT lateral movement stage and then used these features to train the machine learning model to detect APT. Bai et al. 6 used machine learning methods to detect abnormal behaviours of Remote Desktop Protocol (RDP) event logs during the APT lateral movement stage to detect APT. Yan et al. 
proposed an AUID framework, which extracts three major categories of features: host-based features, domain-based features and time-based features. Then it uses the Kmeans algorithm to detect APT. 7 Ghafir et al. 8 proposed an APT detection system based on learning, MLAPT, which mainly uses threat detection module, alert correlation module and attack prediction module to detect APT. The network-based detection methods usually take network flow data as input and aim to find abnormal network packets and abnormal network interactions through statistical analysis, data mining or machine learning. 9 Zhao et al. 10 deployed at the network exit point to extract domain name system (DNS)related features, then performed APT detection based on signature, abnormal behaviour and the features extracted. Niu et al. 11 extracted DNS-related features from the phone's DNS logs to detect APT. Schindler 12 used the graph method and mapped the time series to the kill chain model through a multi-layer structure, then used machine learning methods to detect abnormal behaviours by learning normal behaviours. Both host-based detection methods and networkbased detection methods have adopted machine learning methods mostly. However, machine learning lacks robustness because it is easily attacked by adversarial examples. The adversarial example is to add some small but intentional perturbations to the input sample, but it can cause the machine learning model to output a wrong classification with high confidence. Previous studies believed that adversarial examples work because of the high non-linearity and over-fitting of the machine learning model; Goodfellow et al. first proposed that the appearance of adversarial examples work precisely due to the machine learning model's high latitude and high linearization. Relying on this assumption, Goodfellow et al. 13 proposed the fast gradient sign method (FGSM) to generate adversarial examples and obtained a panda image with a 99.3% confidence of 'gibbons'. Kurakin et al. 14 16 In the field of detecting malware, Grosse et al. 17 successfully misled the malware detection model by modifying the adversarial examples crafting algorithm for image classification. Hu and Tan 18 used GAN to carry out the black-box attack on the malware detection system and successfully bypassed the system. It is popular to use machine learning methods to detect APT, but machine learning has proven to be vulnerable to adversarial attacks in many fields. Although it has not been proven that APT detection systems based on machine learning are vulnerable to be attacked by adversarial example, we believe that adversarial attacks against the APT detection model would have occurred or will occur in the future because the nature of machine learning methods are vulnerable to adversarial attacks. And because APT attacks are more advanced in attack tools and methods than other network attacks, APT attackers are likely to use the characteristics of machine learning models to mislead or even paralyse our APT detection models based on machine learning methods. To verify that machine learning methods used in the APT field are also vulnerable to attacks from adversarial examples, this article first extracts network-based features from the acquired network flow data and trains APT detection models through these features to detect APT attacks by detecting abnormal networks. Then we use the algorithm proposed to grey-box attack and black-box attack APT detection models. In summary, we make the following contributions. 
1. We train APT detection models based on network traffic features and show that these models can detect APT attacks effectively. 2. We summarize the characteristics of APT traffic and propose an adversarial example generation algorithm for the APT field based on these characteristics. Then, we use this algorithm to grey-box attack APT detection models. 3. We show that the emergence of APT adversarial examples is due to the high linearity of machine learning models. In addition, we also show that APT adversarial examples are transitive and use this characteristic to black-box attack APT detection models. APT The life cycle of an APT can be divided into the following stages: reconnaissance, delivery, initial intrusion, command and control (C&C), lateral movement, and data exfiltration. 19 In the reconnaissance and delivery stages, attackers mainly collect information about the target, such as exploits, personnel information and host information. Then attackers use the collected information to attack the target in the initial intrusion stage. In the C&C stage, attackers use the C&C server to control compromised hosts. In the lateral movement stage, attackers move through compromised hosts inside the network and expand their control. In this stage, attackers can control newly compromised hosts, and these hosts can also be used to infect more hosts of the internal network. Finally, attackers can steal sensitive data from the target. Compared with traditional cyber intrusions, APT has the following characteristics: (a) advanced - attackers use advanced attack tools and methods; zero-day vulnerabilities are often used in APT attacks but rarely in traditional attacks; (b) targeted - attackers have clear targets and collect information about the target from the beginning, whereas traditional cyber-attacks often have no clear targets; (c) highly concealed - attackers will stay as concealed as possible in the host to obtain confidential information for a long time; 1 APT attackers often stay in the host for hundreds of days. Due to these characteristics, APTs are difficult to detect with traditional detection techniques, such as intrusion detection technology, vulnerability detection technology and malicious code detection technology. 20 The IoT has been widely used in modern life. The IoT refers to systems composed of interconnected and interrelated devices, objects and sensors. 21 IoT targets have also been threatened by APTs with the widespread application of the IoT, because IoT devices such as sensors are weaker than hosts in terms of security. How to detect and respond to APT attacks has become increasingly important for IoT security, since IoT devices are inherently risky and easy to exploit while being heavily exposed to the Internet. 22 As exposed by Drovorub, APT28 hacked at least 500,000 IoT devices, such as routers, video decoders and printers. 4 APT28 attackers usually use phishing emails to carry out attacks and then use botnets to control IoT devices. In one intrusion activity, APT28 attackers invaded three IoT devices: video decoders, VOIP phones and printers. In the initial intrusion stage and the C&C stage, attackers took control of the printer by exploiting a vulnerability and controlled the video decoder and the VOIP phone by using default passwords. In the lateral movement stage, attackers used compromised IoT devices to perform intranet penetration. In the end, attackers can steal sensitive data from the target. Adversarial attacks Machine learning has a wide range of applications in many fields.
Akinyelu and Adewumi 23 used the random forest method to propose a content-based phishing detection model, which can distinguish extremely harmful phishing emails from normal emails. Girshick et al. combined region proposals with convolutional neural networks (CNNs) to propose R-CNN, which achieved object detection. This method first extracted about 2000 bottom-up region proposals from the input image, then used a CNN to calculate the feature values of each proposal. Finally, the support vector machine (SVM) method used the extracted feature values to classify each region. 24 As the number of layers of a neural network increases, its error rate will also increase. To solve this problem, He et al. 25 proposed a residual learning framework based on deep convolutional neural networks and improved the classification accuracy on the ImageNet dataset. Simonyan and Zisserman 26 proposed that a convolutional neural network with many layers can be used for large-scale image classification. Machine learning classifiers have also been widely used to detect malware. The effectiveness of malware detectors will decrease as malware evolves. Nataraj et al. 27 innovatively mapped the malware binary byte file into a gray-scale image and then detected the malware through image classification methods. Raff et al. 28 proposed a malware detection model based on convolutional neural networks, which mainly used neural networks to test the binary code of the entire file. Alsulami et al. proposed a neural network model based on convolution and long short-term memory (LSTM) to detect the behaviour of software. This model extracted features from Windows prefetch files to realize malware detection. 29 With the widespread application of machine learning in various fields, the security issues of machine learning cannot be underestimated. The methods of attacking machine learning are mainly divided into attacks in the training phase and attacks in the inference phase. An adversarial example x′ is a modified version of an input x for which F(x′) ≠ F(x), where F represents our APT detection models, x represents the original input example, and x′ represents the adversarial example obtained by modifying x. The process of misleading the model by generating adversarial examples is called an adversarial attack. The main problem of adversarial attacks is how to modify the original example to obtain the adversarial example. Adversarial attacks can be divided into white-box attacks and black-box attacks according to the attacker's knowledge of the machine learning model. The attacker in a white-box attack not only knows the structure adopted by the attacked model, but also knows the parameters of the model. 13,14,17 The FGSM, proposed by Goodfellow et al., 13 is a typical white-box attack algorithm. FGSM obtains the gradient of the current input example with respect to the model and then modifies the input example according to the gradient to obtain the adversarial example. Dataset The APT detection model generated in this article is network-based: it detects APT by identifying abnormal network-related information through statistical analysis of network data streams. The benign traffic used for training the APT detection model comes from Tcpreplay, which is real network traffic captured on busy private network access points. The size of the normal traffic is 368 MB, including 40,686 network streams and 791,615 data packets generated by 132 applications. 32 The APT traffic used for the APT detection model consists of real APT samples from the Contagio malware database, which contains 5732 data packets.
33 The same APT traffic is used by many related works. 34,35 The APT traffic includes 36 datasets consisting of 29 APT samples. In our dataset, the APT traffic accounts for about 1% of the normal traffic, which is consistent with that in real life. The low proportion makes it difficult to detect APT. Training the APT detection model There are six steps in the process of training the APT detection model, which is shown in Figure 1. In the following sections, we will introduce every step on how to test the model. Raw data analysis. In the raw data analysis stage, we first used Wireshark to analyse the pcap file of the original raw data. The pcap file in the data set is the network packet and Wireshark are very popular network analysis software. Wireshark can intercept many kinds of network packets and it can also display the detailed information of the network packet obtained, such as source IP address, destination IP address, length and the protocol. In this section, we use Wireshark to obtain detailed information of the original data, which can make preparation for subsequent statistics and feature extraction. Statistics and feature extraction. At the step of statistics and feature extraction, we use the scapy program to perform statistics and feature extraction on the original data based on the results of the previous analysis. The scapy program is a very powerful network data packet processing tool. It can forge data packets, decode data packets, capture data packets and send data packets through the network. The selection of extracted features is not only based on the previous analysis of the original data, but also based on the analysis of the existing network-based APT detection literature. 10,28,34,36 Features we propose are divided into data stream-level features and packetlevel features. The features extracted in this article for detecting APT are as follows. 1. Source port and destination port: To pass the firewall, APT attackers will use the C&C communication protocol and port allowed by the firewall. 10 By analysing the traffic, we found that destination ports used by APT traffic are mostly popular port such as 80, while ports used by normal traffic are mostly dynamic. 2. Protocol: There may be a mismatch between the port and the protocol, because the protocol used by APT attackers is implemented at the encoding stage and the port is configured when locating the C&C server. 10 3. The duration of stream: By comparing and analysing normal traffic and APT traffic, we found that the network flow duration of APT traffic is almost all greater than 0.1 s (99.3%), while the network flow duration of normal traffic is mostly less than 0.1 s (64.9%). The reason why the flow duration of APT traffic is greater than normal traffic is because APT traffic needs to take time to evade the defence system. 4. The number of data packets and traffic bits: To hide themselves, APT attackers will generate fewer packets and bits than normal traffic. 28 5. The number of bits in a data packet: The average packet size of APT traffic is much smaller than normal traffic, 34 because the goal of normal traffic is to transmit information and other normal behaviours, while APT traffic is to invade the system. 6. The number of data packets per unit time: To remain concealed and maintain interaction with the attacked system, APT attackers will generate a small amount of data packets for a long time. 36 However, normal traffic will transmit a large number of data packets in a short period of time. 
7. The time interval of upstream data packets: There is almost no time interval between upstream data packets in normal traffic, while there is a time interval between the upstream data packets of APT traffic. 34 8. The time interval of downstream data packets: Due to the time interval between the upstream data packets in APT traffic, there is also a time interval between the downstream data packets. It will also be slightly larger than the time interval of the upstream data packets in APT traffic. 34 9. The ratio of upstream to downstream traffic: In normal traffic, the traffic the host downloads from the network is greater than the traffic it uploads to the network. In APT traffic, the infected host will send host information to the attacker for further attack instructions, so the upstream traffic will be larger than the downstream traffic. 10 10. The ratio of upstream data packets to downstream data packets: In APT traffic, the number of upstream data packets will be significantly larger than the number of downstream data packets. 34 11. Stream bits per unit time: The APT attacker will keep traffic low to pass through the firewall, so the stream bits per unit time of APT traffic will be less than those of normal traffic. 34 Comparing and analysing the normal traffic and the APT traffic also confirms this. Split dataset. Before splitting the dataset, we need to perform one-hot encoding on the protocol, because it is a categorical variable. We also add a label feature, because the training process of the APT detection model belongs to supervised learning. We mark the APT traffic (positive examples) as 1 and the normal traffic (negative examples) as 0. In this step, we randomly divide the dataset into a training set and a testing set according to a ratio of 80% and 20%. Standardization. The performance of the model will be affected by the different value ranges and dimensions of the features and by the existence of singular examples. Therefore, the extracted features must be standardized before training the APT detection model. The standardized data will be on the same order of magnitude, that is, have the same value range. The standardization process is as shown in Equation (2): x* = (x − m)/s, where x represents the data before standardization, x* denotes the standardized data, and m and s are the average and standard deviation of the training set for the current feature, respectively. After standardization, the average value of each feature is 0 and the standard deviation is 1. It is worth noting that before using the test set, it should also be standardized with the same parameters. Training the classifier model. After standardization, we train four different APT detection models based on different machine learning algorithms. The dataset used to train our models is the training set previously divided in the split dataset step. These models can successfully distinguish APT traffic from normal traffic. We select the k-nearest neighbour (KNN) algorithm, the random forest algorithm, the logistic regression algorithm and the SVM algorithm for training our models. The models we selected include both linear and non-linear models, which allows the adversarial example generation algorithm to be evaluated comprehensively. The algorithms we choose to train our models are based on related works. 34,37 According to our evaluation in the section 'Experimental results and discussion', these models are able to detect APT very effectively. Testing the APT detection model.
This step is used to test whether our model can detect the APT traffic. The data set used to test our models is the testing set previously divided in the split dataset step. The training set occupies 80% of the data set and the test set occupies 20% of the data set. To train our model and reach a better training effect, the training set is greater than the testing set. Before we test the model, we should standardize the testing set. When the training set is standardized, the training set does not need to recalculate its own m and s value. It should be standardized with same parameters, which the training set used. Although we randomly divide the data set, we use the same divided test set for different models when testing the model and the same training set is used when training the model. Test indexes and test results are shown in the section 'Experimental results and discussion'. Crafting adversarial examples In this section, we will focus on the adversarial example generation algorithm in the field of APT attack detection. According to the adversarial sample generation algorithm, attackers can fool the APT detection model through simple calculations. The process of using adversarial example generation algorithm to generate adversarial examples is called adversarial attacks. Adversarial attacks are mainly divided into white-box attacks and black-box attacks. In addition, the algorithm we proposed are grey-box attacks. This means that even if attackers have limited information about the target, they can also have adversarial attacks on the target model. Both white-box attacks and black-box attacks are designed to mislead the APT detection model so that it cannot detect the APT traffic from the normal traffic. White-box attack means that the attacker fully understands the internal structure and the training parameters of the model. 13,14,17 Black-box attack means that the attacker only knows the model's corresponding classification results for input examples. 15,16,18,31 In real life, the attacker can neither fully understand the model nor be ignorant of the model. Moreover, the adversarial attack in this article is the grey-box attack, between the black-box attack and the white-box attack. Grey-box attack means that the attacker can understand part of the information of the model. In this article, we assume that the attacker knows the output probability of the model for the input example. Current adversarial attacks are most aimed at the field of computer vision image classification and malware detection. Next, this article will analyse the characteristic of existing adversarial attacks and combine the characteristic of APT traffic to propose the adversarial example generation algorithm in the APT detection field. The Modified National Institute of Standards and Technology (MNIST) dataset is mostly used in the field of image classification. 13,16,38 Each picture in MNIST is a gray-scale image composed of 28 times by 8 pixels. The grey value of each pixel is 0-255. In addition, the CIFAR-10 data set is also used. 31 Each picture in CIFAR-10 is a colour image composed of 32 times 32 pixels. Each pixel in a colour image has three components, R, G and B, and each component ranges from 0 to 255. In the field of image classification, a pixel of the image is a feature. In the field of malware detection, each feature represents a function of the software. 17 Grosse et al. extracted 545,333 features in malware detection. 
The features extracted are all binary and discrete variables, where 1 means that the software has a given function and 0 means that it does not. In addition, the functions of a single malware sample are very limited compared to the entire feature space, so the input feature vector of a single malware sample is very sparse. Grosse et al. only added functions when making adversarial examples, because the deletion of functions may affect the offensiveness of the malware. In addition, the malware adversarial example generation algorithm imposed an L1-norm limit on the number of functions modified, limiting it to fewer than 20. We can see that the features of input examples are all discrete values, whether in the field of image classification or in the field of malware detection. The range of every feature is also consistent: the range of features in the field of image classification is 0 to 255, and the range in the field of malware detection is 0 or 1. By comparing and analysing the characteristics of APT input samples with those of these other fields, we can see that APT traffic has its own distinct characteristics. Before proposing the algorithm for generating adversarial examples, we first explain its principle. The training process of a machine learning model is to find an optimal model parameter u that minimizes the loss function over the input examples. The process can be formalized as shown in Equation (3): u = argmin_u (1/2m) Σ_{i=1}^{m} (h_u(x^(i)) − y^(i))^2, where m represents the number of input examples, h_u(x^(i)) represents the probability that the model predicts the ith input example to be positive, and y^(i) represents the actual label value of the input example. In short, the training process of the machine learning model is to continuously adjust the model parameters u, without changing the input examples, to find a solution that minimizes the loss function over the input examples. The algorithm for generating adversarial examples is just the opposite. The algorithm is based on a model that has already been trained, that is, the model parameters u are fixed. The algorithm increases the loss function by appropriately modifying the input examples. It is as shown in Equation (4): x′ = argmax_{x′} (1/2)(h_u(x′) − y)^2, where x′ represents the adversarial example obtained after the adversarial example generation algorithm is performed on the current input sample x, h_u(x′) represents the probability that the model predicts the current adversarial example to be positive, and y represents the actual label value of the current example. The generated adversarial example x′ should be such that the model's prediction h_u(x′) deviates from the actual value y as much as possible. Considering that, in actual situations, few people will disguise normal traffic as APT traffic, the proposed adversarial example generation algorithm applies only to the APT traffic, that is, the APT traffic will be disguised as normal traffic. Combining the characteristics of APT traffic, we propose the adversarial example generation algorithm shown in Algorithm 1. Our algorithm is based on the linearity of the attacked model. 13 Small disturbances accumulate to successfully attack the model. In the process of generating adversarial examples, we should modify every feature that can be modified as much as possible, because the number of features we extract is smaller than in other literature. 13,16,17,29
We do not modify the protocol, port and other such features, because these features may become floating-point values after our algorithm modifies them, which does not match the actual situation. As mentioned earlier, the features' orders of magnitude differ. If we used the same amount of modification for each feature, the modification would likely be insignificant for one feature while completely covering the original value of another. Therefore, we calculate the modification amount as a certain proportion of the original value before we modify the feature. This process is indicated in the fourth line of the algorithm, where x_i is the original value and d is the modification (a proportion a of x_i); we will discuss the different values of a later. In the previous description, we assumed that our attack is a grey-box attack, that is, we can obtain the output probability of the model for the input example. F(x_i) in the algorithm is the probability of the input sample being a positive example when the ith feature of the input example is x_i. Since the model is a binary classification model, 1 − F(x_i) is the probability of the input sample being a negative example when the ith feature of the input example is x_i. We select the feature perturbation that minimizes the positive example probability of the input sample, as shown in line 5 of the algorithm. Evaluation indexes The accuracy rate cannot measure the effect of our algorithm well due to the imbalance between positive examples and negative examples in our dataset. We therefore also use precision, recall and F1-score to measure our algorithm in addition to the accuracy rate. The formula of each index used is as shown in Equations (5)-(8). The evaluation of APT detection models We train four different APT detection models based on the KNN algorithm, the random forest algorithm, the logistic regression algorithm and the SVM algorithm. The learning curves of the APT detection models are shown in Figures 2-5. The abscissa is the size of the training set. The ordinates are the accuracy on the training set and the testing set. From Figures 2-5, we can see that the accuracy on the testing set increases as the size of the training set increases. Eventually, the accuracy on the testing set and the training set is almost the same, which indicates that our models can detect APT effectively. Next, we evaluate the models according to the evaluation indexes. The experimental results are shown in Table 1. We can see that the accuracy rates of the four models we trained are all high, but their precision and recall differ. From the perspective of the F1-score, the KNN model, the random forest model and the SVM model perform better than the logistic regression model. The random forest model has the highest F1-score, 0.9791, followed by the SVM model, whose F1-score is 0.9760, and the KNN model, whose F1-score is 0.9730. The logistic regression model has the lowest F1-score, 0.8167. This is because the logistic regression algorithm is too linear to capture the non-linear characteristics in the data well. To verify the effectiveness of our models for detecting APT attacks, we compare our APT detection models with the automated temporal correlation traffic detection system (ATCTDS). 34 To control the variables, we reproduced the ATCTDS model in the same computing environment as the APT detection models. We choose ATCTDS as the comparison model because ATCTDS utilizes the same APT samples as us.
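A minimal sketch of the grey-box perturbation procedure (Algorithm 1) as described above, assuming only query access to the model's output probability F. The predict_proba callback, the list of modifiable feature indices, and the default disturbance rate a are illustrative assumptions rather than the authors' code; protocol- and port-like features are simply excluded from the modifiable set.

```python
import numpy as np

def craft_apt_adversarial(x, predict_proba, modifiable_idx, a=0.02):
    """Grey-box sketch of Algorithm 1: for each modifiable feature, try shifting it by
    a proportion a of its original value in both directions and keep the shift that
    most reduces the model's probability of the positive (APT) class."""
    x_adv = np.array(x, dtype=float).copy()
    for i in modifiable_idx:                  # protocol/port-like features are skipped
        d = a * x_adv[i]                      # perturbation proportional to the value
        candidates = [x_adv[i], x_adv[i] + d, x_adv[i] - d]
        probs = []
        for v in candidates:
            trial = x_adv.copy()
            trial[i] = v
            probs.append(predict_proba(trial))   # grey-box query: P(positive | trial)
        x_adv[i] = candidates[int(np.argmin(probs))]
    return x_adv
```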
Moreover, part of the features used to train the models in this article are extracted according to ATCTDS. From Table 1, we can see that ATCTDS is weaker than the KNN model, the random forest model and the SVM model, but better than the logistic regression model, in precision and recall. This shows that the APT detection models we generated, apart from the logistic regression model, are able to detect APT traffic effectively. The evaluation of adversarial examples on APT detection models First, we apply the proposed adversarial example generation algorithm to the APT traffic, that is, to the positive examples. We do not make adversarial examples for the normal traffic, because few people disguise normal traffic as APT traffic in reality. Second, we reselect the evaluation index to measure the effect of the adversarial examples. We choose the success rate of positive examples classified as positive examples by the APT detection model as the evaluation index. The lower the success rate, the better the attack effect of the adversarial example generation algorithm. The formulation of the success rate is as shown in Equation (9): success rate = TP/(TP + FN). Although the formula for calculating the success rate is the same as that of the recall, their meanings are completely different. The recall is one of the indexes used to evaluate the performance of the APT detection model, whereas the success rate in this article is used to evaluate the attack effect of the adversarial example generation algorithm. The results are shown in Table 2, where a is the disturbance rate. We limit the disturbance rate a to the range from 0 to 0.040. We do not modify the initial example when a is 0. When the disturbance rate is 0, the success rate is different. 13 We apply small perturbations to each feature of the input example, and the accumulation of these small perturbations will eventually bias the model's classification of the input example. We can observe this in Table 3. Comparative experiment To further demonstrate that the adversarial example generation algorithm proposed for APT traffic is effective, we compare our method with existing adversarial example generation methods. In the field of APT detection, no algorithm for generating adversarial examples has been proposed in previous studies. We therefore compare our method with the famous FGSM algorithm from the field of image classification. 13 The attack algorithm of FGSM is as shown in Equation (10): x′ = x + e · sign(∇_x J(x, y)), where x′ means the adversarial example obtained after the FGSM algorithm is performed on the current input sample x, sign(∇_x J(x, y)) is the gradient direction of the model for the current input, and e means the step size of the perturbation. The key of FGSM is the value of e. The information of the input examples will be covered if e is too large, and the algorithm will not work if e is too small. By analysing the original data, we can find that most of the values are greater than 0.05, except for a few values. We choose 0.005 as the value of e through experiments and data analysis. The difference between FGSM and our method is the value of e. The e of FGSM is fixed, but the e of our method is calculated based on the current feature value. The value of the current feature is different for different features and different input examples, so the e is also different. The success rates of FGSM and our method against the APT detection models are shown in Table 4. From Table 4, we can see that FGSM only works for the logistic regression model.
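For reference, a minimal numpy sketch of the FGSM baseline in Equation (10). Only the update rule x′ = x + e·sign(∇_x J(x, y)) is taken from the text; the loss gradient is illustrated with a logistic-regression-style cross-entropy, and the weight vector, bias and feature values below are placeholders rather than the paper's trained model.

```python
import numpy as np

def fgsm_step(x, y, w, b, eps=0.005):
    """FGSM as in Equation (10): x' = x + eps * sign(grad_x J(x, y)).
    J is illustrated as the cross-entropy of a logistic model, whose gradient with
    respect to x is (sigmoid(w.x + b) - y) * w; any differentiable model would do."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))   # predicted P(positive)
    grad_x = (p - y) * w                            # dJ/dx for cross-entropy loss
    return x + eps * np.sign(grad_x)

# Hypothetical usage: perturb one standardized APT feature vector (label y = 1).
w = np.array([0.7, -0.2, 1.1]); b = -0.3
x = np.array([0.5, 1.2, -0.4])
x_adv = fgsm_step(x, 1.0, w, b, eps=0.005)
```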
Both our method and FGSM can successfully attack the logistic regression model due to the high linearity of the logistic regression model. But apart from the logistic regression model, our method also works well for the SVM model and the random forest, but the FGSM does not. To see the comparison more intuitive, we convert Table 4 to Figure 6. Apart from the KNN model being hardly attacked by adversarial examples due to its high non-linearity, our method works very well than FGSM in generating adversarial examples for the APT detection model as can be seen from Figure 6. And the reason why our method works well is because it is proposed based on the characteristics, whereas FGSM is proposed for image classification. The results of the above comparative experiments show that our method of generating adversarial examples for the APT detection model is better than the FGSM method. Moreover, our method can successfully attack the APT detection model through adversarial examples with high attacking success rate. Conclusion In this article, we notice that the model using machine learning methods is highly linearized through literature reading, which makes the model vulnerable to adversarial attacks from adversarial examples. In the field of image classification and malware detection, it has been proved that machine learning models can be attacked by adversarial examples. In the field of detection of APT attacks, machine learning methods are inevitably used whether it is host-based or network-based detection. Based on this, we first train the machine learning model that can detect APT attacks to prove that adversarial examples can also be generated in the field of detection of APT attacks. Then we propose the adversarial example generation algorithm for the APT detection model based on the characteristic of APT traffic and generate adversarial examples according to the algorithm. The decrease in the success rate of the APT detection model for adversarial examples proves that models using machine learning methods are also vulnerable to attacks from adversarial examples in the field of detection of APT attacks. In addition, we also prove that the APT adversarial example is transitive. Finally, we attack the SVM model with the attack success rate of 0.9853, attack the random forest model with the attack success rate of 0.9759 and attack the logistic regression model with the attack success rate of 0.9987. In this article, we propose the adversarial example generation algorithm in the APT field and successfully implement the grey-box attack on the APT detection model according to the algorithm. We prove that the adversarial example appeared due to the high linearization of the attacked model and we also prove that the adversarial example is transitive and based on this, we achieve the black-box attack on the APT detection model. The successful generation of APT adversarial examples indicates that one of our future research directions in the field of APT attack detection will be how to effectively defend against possible adversarial attacks. Declaration of conflicting interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
8,615.6
2022-03-01T00:00:00.000
[ "Computer Science" ]
The genome draft of coconut (Cocos nucifera) Abstract Coconut palm (Cocos nucifera, 2n = 32), a member of genus Cocos and family Arecaceae (Palmaceae), is an important tropical fruit and oil crop. Currently, coconut palm is cultivated in 93 countries, including Central and South America, East and West Africa, Southeast Asia and the Pacific Islands, with a total growth area of more than 12 million hectares [1]. Coconut palm is generally classified into 2 main categories: “Tall” (flowering 8–10 years after planting) and “Dwarf” (flowering 4–6 years after planting), based on morphological characteristics and breeding habits. This Palmae species has a long growth period before reproductive years, which hinders conventional breeding progress. In spite of initial successes, improvements made by conventional breeding have been very slow. In the present study, we obtained de novo sequences of the Cocos nucifera genome: a major genomic resource that could be used to facilitate molecular breeding in Cocos nucifera and accelerate the breeding process in this important crop. A total of 419.67 gigabases (Gb) of raw reads were generated by the Illumina HiSeq 2000 platform using a series of paired-end and mate-pair libraries, covering the predicted Cocos nucifera genome length (2.42 Gb, variety “Hainan Tall”) to an estimated ×173.32 read depth. A total scaffold length of 2.20 Gb was generated (N50 = 418 Kb), representing 90.91% of the genome. The coconut genome was predicted to harbor 28 039 protein-coding genes, which is less than in Phoenix dactylifera (PDK30: 28 889), Phoenix dactylifera (DPV01: 41 660), and Elaeis guineensis (EG5: 34 802). BUSCO evaluation demonstrated that the obtained scaffold sequences covered 90.8% of the coconut genome and that the genome annotation was 74.1% complete. Genome annotation results revealed that 72.75% of the coconut genome consisted of transposable elements, of which long-terminal repeat retrotransposons elements (LTRs) accounted for the largest proportion (92.23%). Comparative analysis of the antiporter gene family and ion channel gene families between C. nucifera and Arabidopsis thaliana indicated that significant gene expansion may have occurred in the coconut involving Na+/H+ antiporter, carnitine/acylcarnitine translocase, potassium-dependent sodium-calcium exchanger, and potassium channel genes. Despite its agronomic importance, C. nucifera is still under-studied. In this report, we present a draft genome of C. nucifera and provide genomic information that will facilitate future functional genomics and molecular-assisted breeding in this crop species.
Data Description Background Coconut palm (Cocos nucifera, 2n = 32), the only species in the genus Cocos in the family Arecaceae, is a tropical oil crop and widely cultivated in tropical regions due to its extensive application in agriculture and industry. Coconut palm is thought to have originated from the Southwest and Western Pacific region (including the Malay Peninsula and Archipelago, New Guinea, and the Bismarck Archipelago). At present, this tropical tree crop is distributed across 93 tropical countries [2], including Central and South America, East and West Africa, Southeast Asia, and the Pacific Islands, and is grown over 12 million hectares of land [1]. In China, coconut palm grows in the subtropical regions-Hainan and Yunnan provinces-as an economic and ornamental plant. Coconut palm is cultivated over approximately 43 000 hectares in Hainan, with the "Hainan Tall" (HAT) variety covering 36 000 hectares [3]. The HAT coconut needs 8-10 years to enter its reproductive stage and has a height of 20-30 meters, with a medium to large sized nut. The HAT cultivar is highly tolerant to salt and drought stress, but sensitive to temperatures below 10 • C. Coconut palm can disseminate through ocean currents: floating nuts sprout and grow naturally upon washing up on beaches. The ability to adapt to a high-salt environment is closely related to this dissemination feature and to these natural growth conditions. The morphological characteristics of the HAT cultivar are shown in Fig. 1. Here, we present the genome sequence of the Hainan Tall coconut and an analysis of the antiporter and ion channel gene families, relevant to salinity tolerance. As draft genome sequences of coconut relatives (e.g., Elaeis guineensis [4] and Phoenix dactylifera [5,6]) have previously been reported, we also performed a comparative analysis between the coconut and these relative species for genome assembly and annotation characteristics. Sample collection and sequencing strategy The genomic DNA was extracted from the spear leaf of an individual of the variety "Hainan Tall" coconut (Cocos nucifera L. Taxonomy ID: 13 894; 19 • 33'3"N, 110 • 47'25"E) from the coconut garden of the Coconut Research Institute (Wenchang, Hainan Province, China) by using the CTAB extraction method [7]. Subsequently, 4 paired-end (PE) libraries with insert sizes of 170 bp, 500 bp, 450 bp, and 800 bp and 5 mate-pair (MP) libraries with insert sizes of 2 Kb, 5 Kb, 10 Kb, 20 Kb, and 40 Kb were constructed using the standard procedure provided by Illumina (San Diego, CA, USA). After library preparation and quality control of the DNA samples, template DNA fragments were hybridized to the surface of the flow cells on an Illumina HiSeq2000 sequencer, amplified to form clusters, and then sequenced by following the standard Illumina manual. Finally, we generated 714.67 Gb of raw reads from all constructed libraries. The raw outputs for each sequenced library are summarized in Table 1. Before assembly, the raw reads were pretreated using the following stringent filtering processes via SOAPfilter (v2.2) [8] software: (1) removed reads with 25% low-quality bases (quality scores ≤7); (2) removed reads with N bases more than 1%; (3) discarded reads with adapter contamination and/or polymerase chain reaction duplicates; (4) removed reads with undersized insert sizes. Finally, 419.08 Gb (estimated 173.17 × read depth) of high-quality sequences were obtained for genome assembly. 
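A minimal sketch of the read-filtering rules listed above, assuming FASTQ-style records already parsed into memory; the quality threshold (scores of 7 or less), the 25% low-quality cutoff, and the 1% N cutoff follow the text, while the adapter string and helper names are placeholders rather than SOAPfilter's actual interface. Criterion (4), removal of undersized insert sizes, requires mapping information and is omitted here.

```python
from dataclasses import dataclass

@dataclass
class Read:
    seq: str
    qual: list  # Phred quality score per base

ADAPTER = "AGATCGGAAGAGC"  # placeholder adapter sequence, not from the paper

def keep_read(read: Read,
              low_q: int = 7,            # scores <= 7 count as low quality
              max_low_frac: float = 0.25,
              max_n_frac: float = 0.01) -> bool:
    """Apply SOAPfilter-style criteria (1)-(3) to a single read."""
    n = len(read.seq)
    if n == 0:
        return False
    low_frac = sum(q <= low_q for q in read.qual) / n
    n_frac = read.seq.upper().count("N") / n
    if low_frac >= max_low_frac:   # (1) too many low-quality bases
        return False
    if n_frac > max_n_frac:        # (2) too many N bases
        return False
    if ADAPTER in read.seq:        # (3) adapter contamination
        return False
    return True

def drop_pcr_duplicates(read_pairs):
    """(3, continued) discard PCR duplicates, i.e. identical read-pair sequences."""
    seen, kept = set(), []
    for r1, r2 in read_pairs:
        key = (r1.seq, r2.seq)
        if key not in seen:
            seen.add(key)
            kept.append((r1, r2))
    return kept
```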
De novo assembly of short reads of Cocos nucifera We used 209.38 Gb of clean reads of the short-insert libraries (excluding the 450-bp library) to estimate the coconut genome size by k-mer frequency distribution analysis [8]. The genome size (G) of Cocos nucifera could be estimated by the following formula: G = N × (L − K + 1) / K_depth, where N represents the total number of reads, L represents the read length, K represents the k-mer value used in the analysis, and K_depth refers to the main peak in the k-mer distribution curve. In our calculations, N was 2 049 520 223, L was 100, and K_depth was 71 for K = 17. As a result, the Cocos nucifera genome was estimated to be 2.42 gigabases (Gb). K-mer size distribution analysis (Fig. 2) indicated that Cocos nucifera was a diploid species with low heterozygosity and a high proportion of repetitive sequences. The sequencing depth is shown in parentheses, calculated based on a genome size of 2.42 Gb; clean data were obtained by filtering low-quality and duplicate reads from the raw data. We then assembled the Cocos nucifera genome using the software SOAPdenovo2 (SOAPdenovo2, RRID:SCR 014986) in 3 steps: contig construction, scaffold construction, and gap filling. In the contig construction step, SOAPdenovo2 was run with the parameters "pregraph -K 63 -R -d 1" to construct de Bruijn graphs from paired-end libraries with insert sizes ranging from 170 to 800 bp. The k-mers from the de Bruijn graphs were then used to form contiguous sequences (contigs) with the parameters "contig -R" by clipping tips, merging bubbles, and removing low-coverage links. In the scaffold construction step, the order of the contigs was determined by using paired-end and mate-pair information with the parameters "map -k 43" and "scaff -F -u". In more detail, SOAPdenovo2 maps the reads from paired-end and mate-pair libraries to contigs based on a hash table (keys are unique k-mers on contigs; values are positions). Two contigs are considered to be linked if the bridging of the contigs is supported by at least 5 paired-end read pairs or 3 mate-pair read pairs. In the gap filling step, gaps within scaffolds were filled by utilizing KGF v1.06 [8] and GapCloser v1.12-r6 (GapCloser, RRID:SCR 015026) [8] with paired-end libraries (having insert sizes from 170 to 800 bp) in cases where one end could be mapped to a contig and the other end extended into a gap. To optimize the assembled sequence, Rabbit (a Poisson-based k-mer model software [9]) was used to remove redundant sequences. A final scaffold length of 2.20 Gb was obtained and used for further analysis, accounting for 90.91% of the predicted genome size and larger than the African oil palm and date palm genomes (Table 2). Meanwhile, the N50 of the obtained contigs was 72.64 Kb and that of the scaffolds was 418.06 Kb, after excluding scaffolds shorter than 100 bp. The comparison of N50 values for the assembled coconut genome and for the 4 previously published palm genomes Elaeis guineensis [4], Elaeis oleifera [4], Phoenix dactylifera (PDK30) [5], and Phoenix dactylifera (DPV01) [6] is listed in Table 2. Genome evaluation The 57 304 unigenes (transcripts obtained from 3 different tissues: spear leaves, young leaves, and fruit flesh), as previously reported by Fan et al. [10], were aligned to the assembled genome of Cocos nucifera using BLAT (BLAT, RRID:SCR 011919) [11] with default parameters.
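The k-mer-based estimate can be checked with a few lines of arithmetic: the total number of k-mers divided by the main-peak k-mer depth reproduces the reported 2.42 Gb when the stated values are plugged in. The snippet below is only that sanity check, not part of the assembly pipeline.

```python
def estimate_genome_size(n_reads: int, read_len: int, k: int, k_depth: int) -> float:
    """k-mer based genome size: total k-mers divided by the main-peak k-mer depth."""
    total_kmers = n_reads * (read_len - k + 1)
    return total_kmers / k_depth

# Values reported for Cocos nucifera (K = 17):
G = estimate_genome_size(n_reads=2_049_520_223, read_len=100, k=17, k_depth=71)
print(f"estimated genome size: {G / 1e9:.2f} Gb")   # ~2.42 Gb
```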
The alignment results indicated that the assembled genome of Cocos nucifera covered 96.78% of the expressed unigenes, suggesting that a high level of coverage has been reached for the assembled genome (Table 3). We also evaluated the level of genome completeness for the assembled sequences by using BUSCO v2.0 (BUSCO, RRID:SCR 015008) [12], which quantitatively assesses genome completeness using evolutionarily informed expectations of gene content from near-universal single-copy orthologs selected from OrthoDB v9 (OrthoDB, RRID:SCR 011980; plant set) [13]. BUSCO analysis showed that 90.8% and 3.4% of the 1440 expected plant genes were identified as complete and fragmented genes, respectively, while 5.8% of genes were considered to be missing from the assembled coconut genome sequence. The comparative results of the BUSCO estimation in the coconut and in the 4 other palm genome sequences indicates that the smallest fraction of missing genes as predicted by BUSCO was found in the coconut genome assembly (Table 4). Repeat annotation We combined homology-based annotation and a de novo method to identify transposable elements (TEs) and the tandem repeats in the Cocos nucifera genome. In the homology-based annotation step, TEs were identified by searching against the Repbase library (v20.04) [14] with RepeatMasker (v4.0.5; Gene prediction We combined 3 strategies to predict genes in the Cocos nucifera genome: homology-based, de novo, and transcript alignment. For homology-based annotation, the protein sequences of Arabidopsis thaliana [18], Oryza sativa [19], Sorghum bicolor [20], Zea mays [21], Elaeis guineensis, and Phoenix dactylifera (DPV01) were downloaded from each corresponding source (see "Availability of data sources"). The coconut genome was aligned against these downloaded databases using TBLASTN [22] with parameter "-e 1e-5 -F -m 8" and BLAST results were processed by solar (v0.9) with parameter "-aprot 2 genome2 -z" to determine the candidate gene loci. Next, we extracted the genomic sequences of candidate gene loci, along with 1 kb of flanking sequences, and applied GeneWise 2.2.0 (GeneWise, RRID:SCR 015054) [23] to define the intron-exon boundaries. The genes with pre-stop codon or frame-shifts were excluded from further analysis. For de novo prediction, we randomly selected 1000 fulllength genes (GeneWise score equal to 100, intact structure: start codon, stop codon, perfect intron-exon boundary) from gene models predicted by homology-based methods to train the model parameters for AUGUSTUS 2.5 (Augustus: Gene Prediction, RRID:SCR 008417) [24]. Two software programs, AUGUSTUS 2.5 and GENSCAN (GENSCAN, RRID:SCR 012902) 1.0 [25], were used to do de novo prediction on the repeat-masked genome of Cocos nucifera. Genes with incomplete structure or a protein coding length of less than 150 bp were filtered out. Subsequently, genes from both homology-based and de novo methods were combined to obtain non-redundant gene sets by using GLEAN [26] with the following parameters: minimum coding sequence length of 150 bp and maximum intron length of 50 kb. Genes were filtered with the same thresholds as were used for homology-based annotation. For transcriptome-based prediction, RNA-seq data (SRR606452), as previously reported by Fan et al. [10], were mapped onto the coconut genome to identify the splice junctions using the software TopHat v2.1.1 (TopHat, RRID:SCR 013035) [27]. 
The software Cufflinks v2.2.1 (Cufflinks, RRID:SCR 014597) [28] was then used to assemble transcripts with the aligned reads. The coding potential of these transcripts was identified using a fifth-order Hidden Markov Model, which was estimated with the same gene sets used in AU-GUSTUS training by train GlimmerHMM, an application in the GlimmerHMM package (GlimmerHMM, RRID:SCR 002654) [29]. The transcripts with intact open reading frames (ORFs) were extracted, and the longest transcript was retrieved as a representative of a gene from multiple transcripts on the same locus. Finally, we merged the GLEAN and the transcriptome result to form a comprehensive gene set using an in-house annotation pipeline with the following steps: first, all-to-all BLASTP analysis of protein sequences was performed between GLEAN results and transcript assemblies, with an E-value cutoff of 1e-10. These transcript assemblies were added to the GLEAN result to form untranslated region (UTRs) or alternative splicing products, depending on whether the coverage and identity of the alignment results reached 0.9 or not. If the transcript assemblies had no BLAST hit with the GLEAN results, these transcript assemblies were added to the final gene set as a novel gene. The protocol for integrating GLEAN and transcriptome data is shown in Fig. 3. Gene evaluation The annotation processes identified 28 039 protein-coding genes ( Table 2), which is less than the predicted gene numbers of Phoenix dactylifera (PDK30, 28 889), Phoenix dactylifera (DPV01, 41 660), and Elaeis guineensis (34 802). Meanwhile, the BUSCO evaluation showed that 74.1% and 11.2% of 1440 expected plant genes were identified as complete and fragmented, with 14.7% of genes considered missing in the gene sets. The BUSCO results showed that our gene prediction was more complete than that of Phoenix dactylifera (PDK30) and Elaeis guineensis, but less complete than that of Phoenix dactylifera (DPV01) ( Table 6). Gene family construction Protein sequences of 13 angiosperms, including Elaeis guineensis, Phoenix dactylifera (DPV01), Sorghum bicolor, Prunus persica, Solanum tuberosum, Glycine max, Arabidopsis thaliana, Theobroma cacao, Vitis vinifera, Musa acuminata, Carica papaya, Populus trichocarpa, and Amborella trichopoda, were downloaded from each corresponding ftp site (see "Availability of data sources"). For genes with alternative splicing variants, the longest transcripts were selected to represent the gene. The gene numbers of Elaeis guineensis and Phoenix dactylifera (DPV01) were greatly different from the research paper published in 2013 [4,6], because genes of these 2 species were re-predicted using the NCBI Prokaryotic Genome Annotation Pipeline, which seemed to be more reasonable. Similarities between paired sequences were calculated using BLASTP with an E-value threshold of 1e-5. OrthoMCL (OrthoMCL DB: Ortholog Groups of Protein Sequences, RRID:SCR 007839) [41] was used to identify gene family based on the similarities of the genes and a Markov Chain Clustering (MCL) with default parameters. About 79.80% of Cocos nucifera genes were assigned to 14 411 families, of which 282 families only existed in Cocos nucifera (coconut specific families) ( Table 7). Fig. 4 shows the shared gene families for orthologous genes. 
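The integration rule described above (BLASTP at an E-value of 1e-10, with coverage and identity of 0.9 deciding how a transcript enters the final set) can be summarised as a small decision function. The field names and the exact mapping of the two outcomes are our reading of the text, since the in-house pipeline itself is not published.

```python
def classify_transcript(best_hit, cov_id_cutoff: float = 0.9):
    """
    Decide how an assembled transcript enters the final gene set.

    best_hit: None if the transcript has no BLASTP hit (E-value <= 1e-10)
              against the GLEAN models, otherwise a dict with alignment
              'coverage' and 'identity' fractions (hypothetical field names).
    """
    if best_hit is None:
        return "novel_gene"                     # no GLEAN counterpart at all
    if best_hit["coverage"] >= cov_id_cutoff and best_hit["identity"] >= cov_id_cutoff:
        return "utr_extension"                  # refines an existing model with UTRs
    return "alternative_splicing_product"       # weaker hit: kept as an isoform

# Example
print(classify_transcript({"coverage": 0.95, "identity": 0.97}))  # utr_extension
print(classify_transcript(None))                                  # novel_gene
```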
There are 544 orthologous families shared by 5 monocot species and 7706 orthologous families shared by all monocot and dicot species, suggesting 544 monocot unique functions shared by 5 monocot species and 7706 ancestral functions in the most recent common ancestor of the angiosperms. Phylogenetic analysis We extracted 247 single-copy orthologous genes derived from the gene family analysis step, and then aligned the protein sequences of each family with MUSCLE (v3.8.31; MUSCLE, RRID:SCR 011812) [42]. Next, the protein alignments were converted to corresponding coding sequences (CDS) using an inhouse Perl script. These coding sequences of each single-copy gene family were concatenated to form 1 super gene for each species. The nucleotides at positions 2 (phase 1 site) and 3 (4 degenerate sites) of codon were extracted separately to construct the phylogenetic tree by PhyML 3.0 (PhyML, RRID:SCR 014629) [43] using a HKY85 substitution model and a gamma distribution across sites. The tree constructed by phase 1 sites was consistent with the tree constructed by 4 degenerate sites. Divergence time The Bayesian relaxed molecular clock approach was used to estimate species divergence time using MCMCTREE in PAML (PAML, RRID:SCR 014932) [44], based on the 4 degenerate sites and the data set used in phylogenetic analysis, with previously published calibration times (divergence between Arabidopsis thaliana and Carica papaya was 54-90 Mya, divergence between Arabidopsis thaliana and Populus trichocarpa was 100-120 Mya) [45]. The divergence time between coconut and oil palm is about 46.0 Mya (25.4-83.3 Mya) (Fig. 5), which is less than the divergence time between coconut and date palm. Identification of antiporter genes in coconut genome Antiporters are transmembrane proteins involved in the exchange of substances within and outside the membrane. In Arabidopsis, the functions of antiporter genes have been well characterized experimentally, and this gene family was subdivided into 13 different functional groups. Among them, 3 functional clusters were involved in Na + /H + antiporters, some of which were documented to be associated with salt tolerance [46,47]. The amino acid sequences of 70 antiporter genes of Arabidopsis were downloaded from the Arabidopsis Information Resource TAIR website (TAIR, RRID:SCR 004618) [48] and used as queries for BLASTP against the predicted proteins in the Cocos nucifera genome with a cut-off E-value of 1e-10. A total of 126 antiporter genes were identified in coconut genome. Using local Hidden Markov Model-based HMMER (v3.0) searches and the Pfam database, 7 antiporter genes were excluded from further analysis because of the lack of conserved domain. The detailed information of the 119 antiporter genes is listed in Additional file 1. In order to elucidate the evolutionary relationship and potential functions of the antiporters identified in the study, we applied phylogenetic analysis of Arabidopsis and C. nucifera antiporter proteins using the neighbor joining method (Fig. 6). Phylogenetic analysis showed that the 119 antiporter genes from C. nucifera can be subdivided into 12 groups and that almost all antiporter genes were clustered together with the functional groups in Arabidopsis thaliana. Phylogenetic analysis showed that the number of antiporter genes was equal between Arabidopsis thaliana and C. 
nucifera for most groups, except for G1 (1 of 3 Na + /H + antiporter family), G3 (carnitine/acylcarnitine translocase family), and G12 (potassium-dependent sodium-calcium exchanger). The G1 group (1 of 3 Na + /H + antiporter families) contained only 1 Arabidopsis antiporter gene and but 14 C. nucifera antiporters (1-At/14-Cn), whereas G3 (carnitine/acylcarnitine translocase family) contained 1-At/29-Cn, and G13 (potassium-dependent sodium-calcium exchanger) contained 3-At/11-Cn. The Na + /H + antiporter family had been reported to be associated with salt stress. The expansion of the Na + /H + antiporter gene family in the coconut palm may be associated with the high salt tolerance of coconut. Meanwhile, carnitine/acylcarnitine translocase is involved in fatty acid transport across the mitochondrial membranes. This gene family expansion may be associated with accumulation of fatty acid in coconut pulp. Moreover, coconut water contains a high density of potassium ion, approximately 312 mg potassium ion per 100 g of coconut water [49]. In this study, the gene number of potassium-dependent sodiumcalcium exchangers was also detected to be significantly increased compared to Arabidopsis. Identification of ion channel genes in coconut genome A total of 67 ion channel genes were identified in the coconut genome (Additional file 2). The amino acid sequences of 67 C. nucifera and 60 Arabidopsis ion channel genes were used to analyze their evolutionary relationship (Fig. 7). Almost all ion channel genes from C. nucifera can be clustered into the function groups found in Arabidopsis thaliana. The number of ion channel genes was equal between Arabidopsis thaliana and Cocos nucifera in most groups except for G5 (potassium channel). Many more genes (21) from C. nucifera than from Arabidopsis thaliana (9 genes) were present in group 5 (potassium channels). The gene family expansion may be associated with the accumulation of potassium ions in coconut water. Conclusion Cocos nucifera (2n = 32) is an important tropical crop, and it is also used as an ornamental plant in the tropics. In the present study, we sequenced and de novo assembled the coconut genome. A total scaffold length of 2.2 Gb was generated, with scaffold N50 of 418 Kb. The divergence time of Cocos nucifera and Elaeis guineensis is more recent than that of Cocos nucifera and Phoenix dactylifera, suggesting a closer relationship between C. nucifera and E. guineensis. Comparative analysis of antiporter and ion channels between C. nucifera and Arabidopsis thaliana showed significant gene family expansions, maybe involving Na + /H + antiporters, carnitine/acylcarnitine translocases, potassium-dependent sodium-calcium exchangers, and potassium channels. The expansion of these gene families may be associated with adaptation to salt stress, accumulation of fatty acid in coconut pulp, and potassium ions in coconut water. The data output of the coconut genome will provide a valuable resource and reference information for the development of highdensity molecular makers, construction of high-density linkage maps, detection of quantitative trait loci, genome-wide association mapping, and molecular breeding.
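The antiporter and ion channel screening described above (BLASTP against the Arabidopsis queries at an E-value of 1e-10, followed by removal of candidates lacking the expected Pfam domain in an HMMER search) reduces to a two-stage filter. The sketch below is illustrative only: the data structures are hypothetical, and the single domain name shown (Na_H_Exchanger, used here for the Na+/H+ antiporter group) would in practice be replaced by the family-specific Pfam domains for each functional group.

```python
def screen_candidates(blast_hits, pfam_domains, evalue_cutoff: float = 1e-10,
                      required_domain: str = "Na_H_Exchanger"):
    """
    Two-stage candidate screen:
    1) keep coconut proteins hit by an Arabidopsis query at E <= cutoff (BLASTP);
    2) drop candidates lacking the expected conserved domain (HMMER/Pfam check).

    blast_hits:   {gene_id: best_evalue}       (illustrative structure)
    pfam_domains: {gene_id: set_of_domains}    (illustrative structure)
    """
    stage1 = {g for g, e in blast_hits.items() if e <= evalue_cutoff}
    stage2 = {g for g in stage1 if required_domain in pfam_domains.get(g, set())}
    removed = stage1 - stage2                  # e.g. the 7 genes excluded in the text
    return sorted(stage2), sorted(removed)

kept, dropped = screen_candidates(
    {"Cn_g001": 1e-50, "Cn_g002": 1e-12, "Cn_g003": 1e-3},
    {"Cn_g001": {"Na_H_Exchanger"}, "Cn_g002": set()},
)
print(kept, dropped)   # ['Cn_g001'] ['Cn_g002']
```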
5,023.2
2017-10-05T00:00:00.000
[ "Agricultural and Food Sciences", "Biology" ]
Interferometric 3 D tracking of several particles in a scanning laser focus Abstract: High-Speed tracking of several particles allows measuring dynamic long-range interactions relevant to biotechnology and colloidal physics. In this paper we extend the successful technique of 3D back-focal plane interferometry to oscillating laser beams and show that two or more particles can be trapped and tracked with a precision of a few nanometers in all three dimensions. The tracking rate of several kHz is only limited by the scan speed of the beam steering device. Several tests proof the linearity and orthogonality of our detection scheme, which is of interest to optical tweezing applications and various metrologies. As an example we show the position cross-correlations of three diffusing particles in a scanning line optical trap. Introduction In biotechnology and modern cellbiology a strong interest exists in the observation of intra cellular transport processes e.g. the diffusion and interaction of vesicles.Transport processes within a biological cell for example, depend on the hydrodynamic environment and changes in the viscosity close to walls, fibers and membranes.Thus bio-microrheology developed into an important research area [1] during recent years. Optical tweezers serve as a useful tool to observe processes steered by Brownian motion and thus gained importance in microrheology and interaction measurements.A variety of new experiments became possible due to the tweezers' ability to trap and manipulate nanoscopic objects with optical forces. In contrast to static interaction measurements [2,3], experiments in the field of microrheology are often designed to measure dynamic interactions (e.g.hydrodynamic coupling) between two or several particles at high temporal resolution.It is therefore necessary to trap and track at least two particles at the same time.Common approaches are to use a line-trap or a twin-trap.The latter consists of two separated, orthogonally polarized beams, each of which is focused to a single point trap [4,5].Due to the high NA trapping lens used for focusing, a cross-talk between both polarization directions is introduced and position measurements need to be corrected for these correlations [6].A line trap can also be created from a hologram [7], where.particle positions are then analyzed by video tracking. Although acquisition speed increased with modern CMOS cameras [8], temporal resolution and accuracy are still limited -for technical and physical reasons.Additionally video tracking is mainly limited to two dimensions and thus often requires squeezing particles between two interfaces to minimize axial particle movements or fluctuations [7].Although, progress in 3D, single particle tracking has been reported [9,10], these methods are less flexible since they require recording calibration curves for each particle type before measurements and subsequent 2D curve fitting.Likewise holographic video tracking requires intensive post-processing [11] and has not yet proven its precision when particles are very close to each other. 
Another approach is to rapidly scan a laser focus along a line.This effectively creates an optical potential, which is very smooth in direction of the line scan, but steep in the other two dimensions [12][13][14].The effective potential depth can be either controlled by locally varying the laser scan speed or by changing the local laser intensity.This technique has proven to be suitable for precise particle interaction measurements [2,3].However, slow video tracking limits applications to static interaction experiments.Therefore a very fast and precise 3D tracking technique such as back focal plane (BFP) interferometry is required, which mainly has been used for static point traps [15][16][17].First attempts to extent this tracking method to scanning line optical tweezers have been successfully realized in one [18] or two [19] dimensions.A dynamic measurement bandwidth of up to 40 kHz was achieved with accousto-optic beam deflectors [18].In this article we show, how BFP interferometry with oscillating lasers can be improved and extended to three dimensions, which enables more realistic studies of dynamic bio-molecular and colloidal interactions. This paper is structured as follows: In section 2, the experimental setup is described.The third section describes signal generation and data processing.Section four shows position traces and histograms.How to characterize the detector responses is described in part 5, followed by a detailed analysis of reconstruction and detection errors in part 6. Finally section 7 concludes with the hydrodynamic coupling between three 970 nm sized silica spheres. Experimental configuration The experimental setup can roughly be divided into a manipulation unit consisting of a scanning line optical tweezers, and an inline interferometric tracking unit with two quadrant photodiodes (QPDs) (see Fig. 1).A more detailed description of the instrumental setup can be found in [16] for a single QPD.The beam from a 1 Watt Nd:YAG laser (λ=1064nm, IRCL-1000-1064-S, Crystal Laser, Reno, NV) is intensity modulated by an acousto-optic modulator (AOM, AA.MT.110/a1.IR, Pegasus Optics, Wallenhorst, Germany) and the first order beam is deflected by two galvanometric scanning mirrors (SM, M2, General Scanning Inc., Watertown, MA).About 3% of the first order laser power is deflected onto a reference diode (InGaAs PIN-QPD, G8370 φ=1mm, Hamamatsu Photonics, Japan) to stabilize the laser power by an electronic feedback (TEM Messtechnik GmbH, Großer Hillen 38, 30559 Hannover, Germany), not shown in Fig. 1.NIR corrected lenses (L1, L2) translate the rotation of the XY scanning mirrors together with a water immersion microscope objective lens (OL, UPLAPO60X/IR NA 1.2, Olympus, Japan) into a lateral shift of the focused trapping beam.Two beam expanders (2X and 4X, not shown), one placed directly behind the AOM and the other after the lenses L1 and L2, lead to a 150 % over-illumination of the BFP of the objective lens.The beam is then focused by the OL into an open chamber, consisting of a cover slip with a fluid solution on top.The maximum laser power reaching the fluid is approximately 150mW.A water immersion dipping lens (63X Achroplan, 0.9 water, 44069, Carl Zeiss, Germany) mounted opposite to the objective lens (OL), serves both as condenser for brightfield illumination and as detection lens (DL) for scattered and unscattered laser light. 
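As a rough numerical illustration of the scan just described, the snippet below builds one period of the saw-tooth mirror trajectory and the Gaussian AOM transmission A(x_tr) ∝ exp(−x_tr²/σ²) that vanishes at the turning points. The parameter values follow the text (L = 10 µm, R = 1 kHz, 2σ ≈ 4 µm); the sampling rate is an arbitrary choice for plotting and is not claimed to match the instrument.

```python
import numpy as np

L = 10e-6          # scan length along x (m)
R = 1e3            # scan repetition rate (Hz): one back-and-forth sweep per 1/R
sigma = 2e-6       # half of the ~4 um (2*sigma) transmission width
fs = 200e3         # sampling rate, for illustration only

t = np.arange(0, 1.0 / R, 1.0 / fs)

# Saw-tooth trap position: forward sweep in the first half period, backward after.
half = len(t) // 2
x_tr = np.empty_like(t)
x_tr[:half] = np.linspace(-L / 2, L / 2, half)
x_tr[half:] = np.linspace(L / 2, -L / 2, len(t) - half)

# Gaussian AOM transmission: maximal in the line centre, ~0 at the turning points.
A = np.exp(-(x_tr ** 2) / sigma ** 2)

v_x = 2 * L * R    # constant sweep speed along the line
print(f"sweep speed v_x = {v_x * 1e3:.1f} mm/s, A at turning point = {A[0]:.4f}")
```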
The BFP of the detection lens is imaged onto two quadrant photodiodes (QPD1, QPD2, InGaAs PIN-QPD, G6849, Hamamatsu Photonics, Japan) with different magnifications, by the lenses L3-L5.Over-illumination of one QPD increases both sensitivity and linear detection range for particle z-displacements [20,21].A non-polarizing beam splitter (BS) deflects light of equal power to both QPD diodes, recording the interference pattern between unscattered and forward scattered light.The QPD signals are electronically amplified (Öffner, MSR-Technik, Plankstadt, Germany) with a 3dB cut-off frequency of 0.85 MHz.A specimen placed in the object plane can be moved accurately in xyz-direction with a piezo scan table (not shown, Tritor 102 cap, Piezosystem Jena, Germany).A minimum step size of 1.2 nm can be achieved in all three directions. Typically, the scanning length of the laser focus is L = 10 µm in x-direction and can be swept with a frequency of currently up to R = 1 kHz.The shape of the trapping potential V(x) within the sample, is only determined by the AOM transmission A(x), when the laser focus, i.e. the point trap, is displaced by x tr (t) along a line at constant velocity v x (t) = 2⋅L⋅R.In our case we applied a saw-tooth shaped signal to the x-scanning mirror as shown in Fig. 1.Simultaneously the laser power |E i | 2 is modulated by the AOM with a Gaussian shaped is about 2σ = 4µm.The transmitted intensity A(±x tr,max )⋅|E i | 2 is nearly zero at the turning points ±x tr,max . Sample preparation We use an open chamber as the sample cell, consisting of a fully transparent cover slip of 150 µm thickness.200µl of ultra pure H 2 O together with 2µl of 1:1000 bead solution are added on top (SiO 2 with natural hydroxyl or silanol (Si-OH) surface groups, Bangs Laboratories, Inc., 9025 Technology Drive Fishers, IN 46038-2886).The nominal bead diameter is 970 nm with a standard deviation <10%, a refractive index of 1.37 and a density of 1.96g/cm 3 .Experiments are performed at room temperature (23 °C). Principles of position detection Although we are still in the very desirable situation that probing and trapping beam are identical, the position detection of a particle in a scanning line trap is more complex than in a in a static point trap, where A(x tr ) = const.and x tr = const.. Signal generation The interference intensity I(k x ,k y ,b) between scattered and unscattered light is recorded with a QPD located in a conjugate plane of the detection lens BFP (coordiantes k x and k y ).Ẽ i (k x ,k y ,x tr ) and Ẽ s (k x ,k y ,x tr ,b) denote the angular spectrum of the focused incident electric field and the field scattered at the diffusing particle at position b(t), relative to the trapping focus displaced by x tr (t).We assume a total interference intensity from the incident field Ẽ i and the scattered field Ẽ s (b),: Here we further approximate that in the focal region a small displacement b j of the particle results in a phase shift ∆φ j (b j ) only in this direction j (j = x,y,z) [16].The Gouy-phase shift (also phase anomaly), inherent in divergent or convergent light, produces a phase shift ∆φ z (b z ) which is linear with the axial bead position b z .At this stage, we further assume no interference between the scattered fields of N≥2 particles, which is reasonable when bead diameter D and focus width are the same. The BFP-intensity in Eq. 
( 1) generates the position signal S′ = (S x ′, S y ′, S z ′)= S′(b,x tr ), which changes with trap position x tr (t) The PIN diode signals S′ m (m = 1…4) are all summed up to obtain the z-position signal S z ′(b,x tr ) and are connected such that the difference of a pair of adjacent diodes provides the lateral signals S x ′(b,x tr ) and S y ′(b,x tr ).The spatial filter function difference of two adjacent PIN diodes.One also has to consider, that due to the back and forth motion of the trap the positions of each particle are horizontally flipped between consecutive scans, as shown by the negative and positive slope of x tr (t) (black line). Processing time series We developed an algorithm which enables us to track two or more diffusing particles in the line tweezers, in three dimensions and with high accuracy. The basic idea of the tracking algorithm is to differentiate the bipolar lateral time signal S x ′(t), in order to obtain peaks at the corresponding particle centers.The temporally resolved lateral peaks are then converted to spatial positions b x in nm according to x b ( ) 2 L R = ⋅ ⋅ ⋅ t t .In other words, only the peak positions are relevant for the detection of the particle position b x .The peak height, which is also modulated by the trapping intensity via the AOM does not influence the result.To increase precision, the S x ′(t) peaks are fitted by a Gauss function before mapping them onto the mirror positions. In contrast, the particle positions b y and b z are extracted only from the peak amplitudes.As shown in Eq. ( 1), these signals are superimposed by the overall intensity variation of the AOM, i.e.A(x tr )⋅|Ẽ i (x tr )| 2 .To obtain the latter, a so called "empty-scan" S es ′(t), i.e. without particles, is performed.In a first step during post processing the empty-scan is subtracted from the raw data according to: The result of the operation in Eq. ( 3) is shown for the z-signal in Fig. 2 right.The axial positions b z of both particles can be reproduced clearly from the peak heights.The peak height is proportional to the axial position b z of the scatterer and only exists due to the Gouy phase shift.In a second step a Gauss function is fitted to each peak, in order to obtain an accurate measure of the peak height.Finally the fit parameters are divided by the corresponding AOM intensity, which is recorded by the reference diode.The same proceeding applies to the Y signal analysis.The extraction of the 3D position b can be summarized as follows: The time series are processed in the local time interval 2∆t < (2⋅R) -1 around the particle position b at time t 0 .g yy and g zz are detector calibration factors and are discussed in the next section.The function extval denotes the extremal value, the function minpos determines the position with the steepest slope of S x '(t), which is effectively the root of S x '(t), but avoids integrating a changing background intensity by taking the differential d / dt instead. 
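A compact sketch of the reconstruction logic described above: differentiate S_x′(t) to locate each particle's crossing, refine the peak with a Gaussian fit, map the peak time to a lateral position via b_x = 2·L·R·t, and take the empty-scan-corrected, intensity-normalised peak amplitude of the z signal as the axial coordinate. The function and variable names are ours, the peak picking is deliberately crude, and the calibration factor g_zz is assumed to be known from a separate calibration.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(t, amp, t0, width, offset):
    return amp * np.exp(-((t - t0) ** 2) / (2 * width ** 2)) + offset

def track_half_scan(t, Sx, Sz, Sz_empty, A_aom, L, R, g_zz, n_particles=2):
    """Reconstruct (b_x, b_z) for each particle from one half scan period."""
    dSx = np.gradient(Sx, t)                    # particle centres appear as peaks here
    # crude peak picking: strongest, well-separated extrema (illustrative only)
    idx = np.argsort(np.abs(dSx))[::-1]
    peaks = []
    for i in idx:
        if all(abs(t[i] - t[j]) > 0.05 * (t[-1] - t[0]) for j in peaks):
            peaks.append(i)
        if len(peaks) == n_particles:
            break

    # empty-scan subtraction and division by the AOM intensity profile
    Sz_corr = (Sz - Sz_empty) / np.maximum(A_aom, 1e-6)

    out = []
    for i in sorted(peaks):
        win = slice(max(i - 15, 0), i + 15)
        p0 = [abs(dSx[i]), t[i], 5 * (t[1] - t[0]), 0.0]
        popt, _ = curve_fit(gauss, t[win], np.abs(dSx[win]), p0=p0)
        t0 = popt[1]
        b_x = 2 * L * R * t0                    # map peak time onto the scan line
        b_z = Sz_corr[np.argmin(np.abs(t - t0))] / g_zz
        out.append((b_x, b_z))
    return out
```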
Particle diffusion in an effective optical potential Particles will diffuse in a time averaged optical potential which is mainly determined by optical gradient forces.Kicks from the passing trap should be negligible for sufficiently fast displacements at v x (t) = 2⋅L⋅R.If we assume a intensity distribution in the focus |E i (x)| 2 = I(x) = I 0 ⋅exp(-x²/∆ x ²)) in lateral direction, with a half focus width ∆ x = 0.61⋅λ/NA defined by the NA of the trapping lens, the gradient force F grad (x) of a point trap can be approximated for small spheres with diameter D << λ as with polarizability α and speed of light c/n in medium with index n = 1.33.For small particle displacements b x in the trap center, the linear approximation F grad = -κ x ⋅b x is justified.From Eq. ( 5) we find κ x ≈ I 0 ⋅α⋅n/(c⋅∆ x 2 ) for the point trap at position x tr .Also for the oscillating trap, one expects an effective three-dimensional optical trapping potential, V eff (b), which is elongated but harmonic in all three directions.The stiffnesses of V eff (b) are κ x,eff , κ y and . Whereas κ y and κ z are mainly defined by the point trap and change only with the laser power ~ A⋅I 0 , the x-stiffness κ x,eff ~ (d²/dx tr ²)A(x tr )| xtr=0 can in addition be modified by the second derivative of the transmission function A(x tr = v x ⋅t) at the point of maximum transmission x tr = 0.The effective trapping potential along x can be averaged over a half scan period (2R) -1 provided that m/γ << (2R) -1 as follows: Here γ = 3π⋅D⋅η is the viscous Stokes drag and x ɺ the particle velocity.For quantitative analysis, it is necessary to calibrate both, the optical trap and the detection system.In other words, the relation between optical forces F(b) and particle displacement b have to be determined, as well as the relation between detector responses S(b) and the particle displacement b.Therefore, we used the Langevin method described in detail in [16] under the assumption of a linear response such that F i (b i ) = κ ii ⋅b i and S i (b i ) = g ii ⋅b i where κ ii ⋅ and g ii are diagonal matrices for the trap stiffness and the detector sensitivity, respectively (i = x,y,z).For a properly aligned optical system and well chosen spatial filters and lenses, this approximation is justified across the trapping volume.That means the contour lines of experimentally obtained detector responses S x, S y and S z are orthogonal to each other.The experimental procedure for measuring the detector responses is sketched in Fig. 5. Measuring the detection response A particle is fixed on a coverslip due to an increased ion concentration (which enables binding of the bead to the coverslip by Van de Waals forces) and moved axially through the scanning focus.In addition, a lateral motion of the piezo in y-direction can be superimposed (meander scan).For known particle positions b, the signal response S y (0,b y ,b z ) and S z (0,b y ,b z ) can obtained alternatively to the Langevin method.However, the contour plots in Fig. 4 show in addition the quality of the tracking procedure.A line profile (blue line) through S z , indicates the linear detection range (thick black line) and the slope corresponds to the sensitivity g ZZ .Additionally the central trapping position of a particle is indicated in the contour plot (grey circle).This position appears to be in the center of the linear detection range.Within the linear region the contour lines of S y and S z intersect each other perpendicularly.This corresponds to a diagonal sensitivity matrix g ii . 
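Because the effective potential is probed through the particle's position statistics, the stiffness along each axis can be read off the position histogram via Boltzmann's relation V(b) = −kT·ln p(b) + V0 followed by a harmonic fit. The sketch below assumes a recorded 1-D position trace in metres and is only an illustration of that calibration idea, not the Langevin method of reference [16].

```python
import numpy as np

kT = 1.380649e-23 * 296.0   # thermal energy at ~23 degC (J)

def stiffness_from_trace(b, bins=60):
    """Estimate kappa from a 1-D trace via V = -kT ln p and a parabolic fit."""
    hist, edges = np.histogram(b, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = hist > 0
    V = -kT * np.log(hist[mask])          # potential up to an additive constant
    c2, c1, c0 = np.polyfit(centers[mask], V, 2)
    return 2.0 * c2                       # V = 0.5 * kappa * (x - x0)^2 + const

def stiffness_equipartition(b):
    """Shortcut valid for a harmonic trap: kappa = kT / var(b)."""
    return kT / np.var(b)

# Example with simulated positions in a 2.5 pN/um trap (the y-stiffness quoted above)
rng = np.random.default_rng(1)
kappa_true = 2.5e-6                       # N/m
b_sim = rng.normal(0.0, np.sqrt(kT / kappa_true), size=200_000)
print(stiffness_from_trace(b_sim), stiffness_equipartition(b_sim))
```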
In the case of a line trap, the sensitivity matrix should remain constant over the extension of the scan range.As a control the described analysis of the detector sensitivity needs to be done at different positions in the trap.The resulting detector responses at three different positions (x A , x B , x C ) are given in Fig. 3µm away from position x A , which corresponds to the trap center.Point x A was determined by trapping a single particle in solution while approaching the surface of the coverslip slowly in axial direction, until the particle attached.Positions x B and x C are then reached by moving the piezo stage laterally.At each of the three positions (x A , x B , x C ) a data cube was recorded by moving the fixed particle in 110nm steps by the piezo actuator in z-axial and y-lateral direction through the scanning line trap.Fig. 5 shows the post processed detector responses S x and S z at the different positions after normalizing with the relative laser powers A(x A ), A(x B ), and A(x C ) .The slopes of each response at points x A and x C coincide relatively well with each other and do not show strong deviations.The signals S y (y) and S z (z) at position x B have a slightly reduced sensitivity likely due to a non-optimal optical alignment, which however does not affect the overall tracking precision due to a small probability density at this position. Position accuracy Finally, to fully characterize the tracking method, several statistically independent sources of errors were considered, as there are: Electrical and mechanical noise, optical alignment and interference contrast and inaccuracies due to the sample rate used for data acquisition. Mechanical stability and electrical noise are analyzed in Fig. 6(a).A particle fixed on a coverslip is moved by successive steps of the piezo actuator in axial (z) and lateral (x, y) direction.As a measure of precision, the standard deviation of the reconstructed steps σ i = g ii -1 ⋅σ Si is calculated (i = x, y, z).For better illustration the reconstructed particle positions are superimposed by a sliding average.Piezo driven steps of 10nm and 5 nm are easily resolved in x, y and z direction, respectively.A standard deviation of σ x = 7.5 nm is assumed to be mainly due to the positioning noise of the galvanometric mirrors.A σ y = 1.1 nm in y-direction is about that of a point trap.The z-precision (σ z = 2.3 nm) is also close to that value. Optical alignment and interference contrast: The position reconstruction, depending on the position of the particle in the trapping volume, was studied.For this purpose axial and lateral piezo steps b j are compared with reconstructed steps S j /g jj in the linear range.The difference δb j = b j -S j /g jj between the actual particle positions and the reconstructed positions is calculated and shown with histograms of position errors δb x , δb y and δb z . in Fig. 6(b).In all three directions the distributions are slightly shifted towards positive errors, indicating that the displacements are underestimated by the detector.Similar findings were made for a point trap [22].The (1/e) widths of the distributions are similar to those found for the mechanical instability and electronic noise. 
Inaccuracies due to the sample rate: A systematic error is introduced in the reconstruction algorithm by the sample rate f of the data acquisition system.The number of sample points per lateral trap displacement is reduced at increased scan speeds v tr .However, at some point the shape of the detector signal is influenced and the precision of the Gaussian fits will degrade.In order to quantitatively choose the proper rate, one data set of two diffusing particles was between the particles was calculated from the reconstructed positions.The the result at maximum sample frequency f max = 400 kHz was subtracted to obtain the error d(f) = ∆b(f)-∆b(f max ).Fig. 6(c) shows the dependence of the distance error d on f.For sample rates smaller than f = 150 kHz d increases strongly due to imprecise Gaussian fitting.We therefore decided to record our data at a sample rate of f = 200 kHz.At higher sample rates f the error d still appears to scatter, mainly due to an inaccuracy in the z signal fit. In particle video tracking difficulties arise, when the diffusing particles come in very close proximity to each other [23].The tracking method introduced here is supposed to reliably track also adjacent or even slightly overlapping particles without significant cross talk in the reconstructed positions.To proof this robustness, two particles are tracked simultaneously with one being fixed on a coverslip, while the other diffusing freely in the optical potential.During tracking, the coverslip is step wise moved upwards, such that at some point the diffusing particle is also pushed upwards by the approaching coverslip.The reconstructed positions of the two particles are shown in Fig. 7(a).The axial 50 nm steps of the fixed particle are clearly resolved, and the reconstruction is almost independent on where the second particle is located.However a small influence becomes apparent when the spheres are in direct contact and both scattered fields E S1 and E S2 contribute to the interference at the QPD [24], such that Eq. ( 1) changes to Therefore, we investigated in a next step, how strongly the tracking precision in the xdirection was affected by an overlap ∆x of two adjacent particles.The context is illustrated in Fig. 8 performed: Starting from zero overlap ∆x = 0, two single particle detector response curves are superimposed with one bead diameter D apart and summed up to form the overall two particle signal.The tracking algorithm is then used to measure the distance between them.Comparing the measured distance with the simulated signal shift, defines a reconstruction error.Successively reducing the signal shift (decreasing ∆x), results in the curve given in Fig. 7(b).An overlap of more than 100 nm in the x-lateral direction leads to an exponential increase in the reconstruction.However, we want to emphasize that these positions from imprecise signals occur only very rarely for line traps with trap stiffnesses as used in this paper.It is apparent that taking only ∆b x underestimates the center to center distance (mimimal ∆b x < D=970nm) and thus would lead to wrong interaction potentials V(∆b), whereas taking the three-dimensional ∆b ≥ D indicates the precision of our tracking method.The two particles appear to be most frequently separated by ∆b = D + 100 nm. 
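The overlap simulation described above can be mimicked with a few lines: model the differentiated single-particle signal as a peak of fixed width, superimpose two copies separated by D minus the overlap, fit a two-peak model, and compare the recovered separation with the true one. The peak shape and width used here are assumptions for illustration; the real detector response would be taken from measured single-particle data.

```python
import numpy as np
from scipy.optimize import curve_fit

D = 970e-9                  # bead diameter (m)
width = 300e-9              # assumed width of a single-particle peak (illustrative)
x = np.linspace(-3e-6, 3e-6, 2000)

def single_peak(x, x0):
    """Idealised differentiated lateral signal of one particle (assumed Gaussian)."""
    return np.exp(-((x - x0) ** 2) / (2 * width ** 2))

def two_peaks(x, a1, x1, a2, x2, w):
    return (a1 * np.exp(-((x - x1) ** 2) / (2 * w ** 2)) +
            a2 * np.exp(-((x - x2) ** 2) / (2 * w ** 2)))

def reconstruction_error(overlap):
    """|measured - true| separation for two particles overlapping by `overlap`."""
    sep = D - overlap
    signal = single_peak(x, -sep / 2) + single_peak(x, +sep / 2)
    p0 = [1.0, -0.8 * sep / 2, 1.0, 0.8 * sep / 2, width]
    popt, _ = curve_fit(two_peaks, x, signal, p0=p0, maxfev=5000)
    return abs(abs(popt[3] - popt[1]) - sep)

for ov in (0e-9, 50e-9, 100e-9, 200e-9):
    print(f"overlap {ov*1e9:4.0f} nm -> error {reconstruction_error(ov)*1e9:6.1f} nm")
```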
Static and Dynamic interactions As mentioned in the introduction, one aim of the developed tracking method is to analyze dynamic interactions between several diffusing spheres.The temporal resolution is defined by the number of points necessary to reliable sample the detector responses.Currently 12.5 points per µm are needed to identify a peak from a 970 nm sphere, which results in a temporal resolution of 2R = 1.6 kHz for the current setup.Substituting the galvanometric mirrors by acousto-optic deflectors would allow to increase the tracking rate up to 2R = 8 kHz, considering a signal amplifier bandwidth of approximately f = 1 MHz and a scanning length of L = 10 µm.Another important aspect is the influence of the sweeping optical trap on the particle diffusion.Therefore we measured the kick-displacements ∆ kick in nm for various scan speeds of the laser trap.At a comparable laser power, the trap was moved in a circle, to enable kicking in only one direction.After some hundred passes of the trap the displacement due to laser trap kicking could be easily measured.The result is summarized in Table 1, where ∆ kick per kick decreases to about zero for trap speeds v trap > 30mm/s.Such speeds can be easily achieved by AODs for line traps with extensions of L = 10µm. As an application interesting to colloidal physics and nanoscopic particle transport we evaluated the cross correlation of the positions of three 970nm spheres in a scanning optical line trap.The cross-correlations shown in Fig. 9 reveal the hydrodynamic coupling between the spheres in a time window from 1/R = 1.25 ms to times limited by the AC-time of the trap in the corresponding direction.According to the different trap stiffness in the x, y and z direction, the interaction time can be measured over several 100 ms in the weak x-direction.In direction y and z, i.e. perpendicular to the scan direction, the correlation curves show a pronounced dip for neighbored particles (beads (12), beads (23)), but the anti-correlated motion is still visible for particles not being in direct contact to each other. This anti-correlated behavior, known from two-particle interactions [4], is not observed in scan direction and likely is a result of laser trap kicking due to a too slow scan speed [12].These inter-particle correlations will be discussed in more detail in a further paper, which is in preparation. Conclusions In this paper we have discussed and explained in detail the principles of how to track several particles by back-focal plane interferometry using an oscillating laser beam.The system is in particular interesting to optical trapping applications, since trapping and tracking is achieved with the same beam and therefore does not require complicated alignment.Interferometric position signals are so pronounced that on the one hand tracking works very well even at low laser powers and low optical forces, and, on the other hand our tracking precision is less limited by photon shot noise than with high speed video-tracking.Video-methods including holographic techniques, have physical intrinsic problems concerning axial The data reveal an anti-correlated motion due to hydrodynamic coupling between the particles in the y and z-direction, whereas the x-lateral direction shows a positively correlated motion.Cross talk between directly neighbored particles is pronounced compared to the outer most particles. 
position tracking, especially for more than one particle, since out of focus intensity distributions must be know, which vary strongly with the degree of spatial and temporal coherence, with the angular spectrum of the incident light and with the particle properties.With our technique, tracking in 3D is possible without pre-calibration and at rates of several 10 kHz using AODs together with a fast signal sampling, which is offered nowadays by many medium cost DAQ cards.Although in our approach particles have been tracked in scanning line tweezers, arbitrary scan curves different to a straight line can be programmed.Alike, the underlying optical potential can be modulated arbitrarily with an AOM or AOD, which does not affect our tracking precision or post-processing.The achieved tracking precision with the standard deviation of σ x = 7.5 nm can be further reduced by a more stable scanning device, the very good σ y = 1.1 nm in y-and σ z = 2.3 nm in axial z-direction are superior to any other comparable tracking technique.The linear detection range is sufficiently large for trapping applications and can be further increased, by taking differently shaped laser foci for nontrapping applications.Since we used an oscillating point trap, we do not have problems with interference of scattered light (optical binding) as in static line traps, e.g. as with holographic optical tweezers. We think that this study is helpful to a variety of optical tweezing labs, especially to those being aware of the great potential of dynamic particle interactions.Optical traps enable a diffusion and fluctuation driven interaction of binding partners with enhanced contact probability.The possibility to observe these motions in 3D, with nanometer precision and at high frame rates will open new doors in bio-technology and modern biology. Acknowledgment This work was supported by the Deutsche Forschungsgemeinschaft (DFG), grant number SP 1145.The authors thank Matthias Koch for reading the manuscript, as well as Dr. Christian Fleck and Fidel Córdoba Valdés for helpful discussions. Fig. 1 . Fig. 1. (Color online) Schematic of the trapping and tracking setup.A NIR-laser is modulated by an AOM and deflected in phase by two galvanometric scan mirrors (SM).The rotational motion is translated into a lateral displacement by the scan lenses (L1, L2) and the objective lens (OL).The back focal plane of the detection lens (DL) is imaged onto two quadrant photodiodes for axial (QPD2) and lateral (QPD1) position detection.The inset shows a magnification of the focal plane, where the oscillating laser focus probes the positions b1 and b2 of two particles.transmission function A(x tr = v x ⋅t) ~ exp(-x tr ²/σ²) with maximum intensity A(x tr =0)⋅ |E i | 2 in the center of the line trap, corresponding to about 70mW.The width of the Gauss function A(x tr ) is about 2σ = 4µm.The transmitted intensity A(±x tr,max )⋅|E i | 2 is nearly zero at the turning points ±x tr,max . and particle position b(t).As with the static trap, the position signal S′(b) = ∫ I(k x ,k y ,b)dk x dk y is obtained by integrating over the area A m of the m-th PIN diode (m = 1..4): Fig. 2 . Fig. 2. 
(Color online) Time series Sx′(t),and Sz′(t) of the x (green) and z (blue) detector responses of two trapped particles.The trap positions xtr(t) correspond to the black line.The axial sum signal is superimposed by the modulated laser intensity.Due to the back and forth motion of the trap, the positions (peaks) of each particle are horizontally flipped per scan.Right: z-raw data and "empty-scan" (red and black lines) and the resulting post processed zdata (blue) for two particles with z-positions, i.e. peak heights indicated by the arrows. Fig. 3 . Fig. 3. (Color online) Left: time series of two 970 nm particles (red and blue traces) in a line trap.The distance between the x positions reveal the bead diameter.Center: Three 1D histograms of a single particle reflect the trap stiffnesses.Right: 2D histograms show how the particles distribute in the trap, which is also sketched above in 3D. ) #104816 -$15.00USD Received 2 Dec 2008; revised 12 Jan 2009; accepted 12 Jan 2009; published 13 Jan 2009 (C) 2009 OSA V eff (b x ) and the three trap stiffnesses can be obtained from the measured position histogram proportional to p(b x , b y , b z ) = p 0 ⋅exp{-V(b x , b y , b z )/kT} as shown in Fig.3center.The vanishing AOM transmission A(x=-L/2) = 0 is the starting point for the integration of the effective force 〈F grad (b x ,t)〉 to the potential V eff (b x ).The stiffness in x-direction is about 50fN/µm, about 2.5pN/µm in the y-direction and around 0.3pN/µm in the axial z-direction.These numbers refer to a single particle, which was tracked over 40 seconds and explored the whole trapping volume.Experimental data of two particles diffusing in the line trap is shown in Fig.3left.The time series b x (t) ~ S x (b x ) of the two particles (red and blue traces) are separated at minimum by the bead diameter of D = 970 nm.The 2D histograms on the right show the distribution of both particles within the optically created potential.This additionally gives an idea of the diffusive volume the particles explore.The trajectories are also shown in 3D with the particles indicated as scaled spheres. Fig. 4 . Fig. 4. (Color online) Linear detection range of the detector response of a 970nm silica sphere.Fixed on the coverslip, the particle is moved downwards in axial direction (see arrows) while the laser sweeps across it in x-direction.The method is illustrated for three different axial coverslip positions (I, II, III).The resulting detector response has a linear range (blue line) and the slope of a line fit encodes the axial sensitivity gz(bz).A lateral movement of the coverslip results in the sensitivity gy(by).The contor plots for gy(by) and gz(bz) are shown and the linear range is indicated by a black line. 5 . At positions x B and x C the fixed probe was placed $15.00 USD Received 2 Dec 2008; revised 12 Jan 2009; accepted 12 Jan 2009; published 13 Jan 2009 (C) 2009 OSA Fig. 5 .Fig. 6 . Fig. 5. (Color online) Detector responses Sy, Sz at the positions A, B and C within the optical potential.Points B and C are 3µm away from the center point A. Fig. 7 . Fig. 7. (a).(Color online) The z-positions of two adjacent particles can be independently tracked.(b).Increase in the x-position reconstruction error as a function of the sphere overlap ∆x between two particles (see text for details). 
From the histogram of the reconstructed particle positions the optical potential, probed by a diffusing particle, can be derived via Boltzmann statistics.The position histogram H opt (b x ) ~ p opt (b x ) and the corresponding potential V eff (b x ) = -kT⋅ln(p opt (b x )) + V 0 in laser scan direction (Eq.(6)) are shown in Fig. 8(a).A harmonic function fits well to the potential with a depth of almost 8 kT explored by the particle.Distance histograms of two particles diffusing in the potential V eff (b) are shown in Fig. 8(b).Here the two histograms H(∆b) and H(∆b x ) for the 3D distance ∆b = |b 1 -b 2 | and the 1D distance ∆b x = |b x1 -b x2 | of the two particles (bin size 20 nm) are compared. Fig. 8 . Fig. 8. (a).(Color online) Histogram of x-positions of a single particle and optical potential derived from Boltzmann statistics.(b).If two particles diffuse in the optical potential left, bead distances ∆b = |b1-b2| vary.The histogram H(∆b) of the 3D bead separation is compared to the histogram H(∆bx) calculated from the distances ∆bx = |bx1-bx2| of lateral x-positions only.The bead diameter D is indicated by the red line (for details see text). Fig. 9 . Fig. 9. (Color online) Cross correlation data of three 970nm particles in the line optical tweezers.The data reveal an anti-correlated motion due to hydrodynamic coupling between the particles in the y and z-direction, whereas the x-lateral direction shows a positively correlated motion.Cross talk between directly neighbored particles is pronounced compared to the outer most particles. Table 1 : Kick-displacements ∆xkick in nm for various speeds vtrap of the passing trap in mm/s.
8,013.6
2009-01-19T00:00:00.000
[ "Physics", "Engineering" ]
Regeneration of paint sludge and reuse in cement concrete Paint Sludge (PS) is a hazardous waste. Inappropriate disposal of PS can harm public health and the environment. Paint Sludge Solid Powder (PSSP) particles of various sizes have been produced by automatic processing equipment via dewatering, crushing, screening, removal of Volatile Organic Compounds (VOCs), etc. The test results show that the resulting PSSP is not a hazardous waste. Both flexural and compressive strength are increased by adding polyurethane PSSP to cement concrete at levels below 10% of cement weight. However, the strength is significantly reduced at levels above 15% of cement weight. The increase in strength is probably due to a slow coagulation and copolymerization of PSSP and cement, whereas the reduction is likely due to self-agglomeration of PSSP.

Introduction The raw PS produced by the automotive industry using a sediment scraper usually contains 75-90 wt.% water and 10 wt.% volatile organic compounds (VOCs). Removing water and VOCs from the raw PS is therefore essential for recycling it. However, it is difficult to reduce the water content below 40 wt.% of PS by mechanical dehydration; in addition, the dewatered PS quickly forms a soft lump, which further complicates recycling. E. M. James designed a vacuum filtration system for dewatering PS [1]. It enhances dewatering in a vacuum chamber using negative pressure and then removes residual water with hot pressurized air. R. Elangovan also designed a PS recovery and reuse converting machine [2]. First, the water and solvent in the raw PS are turned into steam (vapor and VOCs) by heating the PS. The mixed steam is then collected through a cooling unit, collection unit, wet scrubber unit, etc. This machine could recover 34% of reusable solvent, 48% of stabilized solid residue and 18% of water. Although such equipment can effectively dewater the PS, it cannot dewater sufficiently without a crushing device. Therefore, the reprocessed PS is difficult to reuse directly. Recently, comprehensive processing and utilization equipment for PS was designed by LEBANG Co., Ltd in Tai'an, China [3]. All processing units, such as dewatering, drying and crushing, are integrated, so it is easy to produce PSSP of 50-1000 mesh with less than 2% water. Moreover, the treatment process does not cause environmental pollution because it includes systems for waste water, waste gases and dedusting.

Recycling and reusing PS has been an important topic in recent years. Most studies focus on construction and road pavement materials. Lightweight construction materials have been obtained using acrylic-resin PS as a partial replacement for sand in cement concrete [4]. The results have demonstrated that these materials can be used in residential construction. Furthermore, when PS was used as a modifier to replace part of the asphalt, satisfactory results were achieved in road pavements [5,6].

In this study, we produced polyurethane PSSP regenerated with the automatic PS-recycling equipment made by LEBANG Co., Ltd. The purpose is to add polyurethane PSSP to the concrete product as a filler and to investigate the effects of different additives and percentages of PSSP on flexural and compressive strength.
Materials and Methods Materials for this study include cement, sand, water and regenerated PSSP. The raw PS was provided by the automotive manufacturer SINOTRUK Co., Ltd in Ji'nan, China. The cement is the commercially available type P.O32.5 (GB175-2007). Sand was acquired from an unknown source.

The experiments include two steps. First, the raw PS was processed into PSSP with a 200 mesh size. Then, this PSSP was mixed with cement, water and sand in certain proportions. The standard mold size was 40 mm × 40 mm × 160 mm prisms. The preliminary assessment for all specimens was performed at 7 days of aging.

Preparation of PSSP According to the PS processing flow chart in Figure 1, the raw PS of polyurethane (Figure 2) was regenerated into PSSP with 200 mesh (Figure 3). The chemical composition and leaching toxicity before and after treatment are shown in Table 1.

Preparation of the PSSP-concrete specimens According to the water absorption test of PSSP and the cement mortar test method in ISO 679:1989, the water was increased by one third of the PSSP mass accordingly. This study investigated the effects of additives and PS content on the mechanical properties of PSSP-concrete. Bubbles and interfaces may be created when PSSP is mixed with concrete because the PSSP is mainly composed of polymer resin and pigment. Therefore, it is necessary to modify it using additives such as a surfactant (sodium polyphosphate) and a defoamer (DF460) at levels of 2‰ and 2.5‰ of total water, respectively. The exact compositions with different additives are shown in Table 2. Furthermore, the compositions with the same additives and PSSP contents of 5, 10, 15 and 20 wt.% of cement are shown in Table 3.

Mechanical Properties Flexural and compressive tests were performed using a fully automatic cement pressure testing machine (DYH-300B) with grade 1 precision and a maximum stroke of 100 mm.

Results of PSSP regeneration treatment According to the results in Table 1, the regenerated PSSP is no longer a hazardous waste because its leaching toxicity is far lower than the national standard values for hazardous waste [7]. In addition, the water and VOC levels of the PS are significantly decreased: the water content fell from 31.5 to 1.7 wt.%, a removal efficiency of 94.6%.

Effects of additives on strength of PSSP-concrete Figure 4 and Figure 5 show the results of the flexural and compressive tests for PSSP-concrete, respectively. As can be seen, when PSSP accounts for 5% of cement weight, the flexural and compressive strength of the specimens are increased by 5.76% and 12.46% with the single surfactant, by 3.36% and 12.69% with the single defoamer, and by 1.4% and 16.8% with both surfactant and defoamer, respectively. In addition, when PSSP accounts for 10% of cement weight, the flexural strength is decreased by 8.6% and 2.16%, while the compressive strength is increased by 10.8% and 14.8%, with the single surfactant and the single defoamer, respectively. Unfortunately, both the flexural and compressive strength of the specimens declined when the PSSP content was increased towards 15% of cement weight in either group. It is evident that the surfactant and defoamer have a similar effect at a PSSP content of 5%, whereas the defoamer has a more significant effect when the PSSP content exceeds 10%. This may be because sodium polyphosphate not only increases wettability but also forms complex compounds with Ca2+ and Mg2+ in the PSSP and cement, acting as a bridge, whereas DF460 mainly eliminates bubbles. The PSSP usually contains a small amount of residual flocculant, so more bubbles enter the mortar mixture as the PSSP content increases, and these bubbles become the main factor. Using sodium polyphosphate and DF460 together had a synergistic effect and increased the strength of the specimens.
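As a quick check of the regeneration result reported above, the 94.6% figure is consistent with the measured water contents if the removal efficiency is defined relative to the initial water fraction (an assumption, since the formula is not stated):

η_removal = (w_initial − w_final) / w_initial = (31.5 − 1.7) / 31.5 ≈ 0.946 ≈ 94.6%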
Effects of PSSP Content on Strength of PSSP-Concrete Strength tests were performed to determine the effect of PSSP content when sodium polyphosphate and DF460 are used together. Figure 6 shows the results of the strength tests for PSSP-concrete. As seen in the figures, when the PSSP content is below 10%, the flexural and compressive strength are greatly improved; the maximum compressive strength increased by 16.7% compared with the control sample. However, the strength began to decline when the PSSP content reached 15%, and it is far lower than that of the control sample when the PSSP content reached 20%. These trends are probably because the small amounts of residual isocyanate and hydroxyl groups in the PSSP can copolymerize with the hydroxyl groups in the cement, improving the degree of cross-linking. At the same time, PSSP acting as a filler may moderate the hydration of the cement and reduce the stress between sand and cement, thereby increasing strength. Conversely, PSSP can agglomerate when the dosage is too high, which increases the interface area between the powder and the cement paste and decreases strength.

Conclusion In this study, we focused on the regeneration of PS and its reuse in cement concrete. We produced non-toxic PSSP. Based on the results from PSSP-concrete, changes in mechanical properties have been observed. PSSP with different particle size ranges can be obtained via high-efficiency dehydration, crushing and screening using the comprehensive processing and utilization equipment for PS. More importantly, the PSSP is no longer a hazardous waste. The flexural and compressive strength of the specimens are improved by adding PSSP at below 10% of cement content in concrete, and they decline significantly when PSSP is added at above 15% of cement content.

Table 1. Changes in chemical composition of PS before and after treatment.
Table 2. Compositions with different additives.
Table 3. Compositions with the same additives and different percentages of PSSP.
1,926.4
2018-06-01T00:00:00.000
[ "Materials Science" ]
A State-of-the-Art Study on Energy Harvesting Systems: Models and Issues Background/objectives: Energy is highly essential for the life of living beings. As technologies are getting advanced, the consumption of energy increases continuously. Conventional sources available at earth are limited, and it will be going to drain day by day. Therefore, it is necessary to design models which are capable of harvesting energy using natural resources. The main objective of this paper is to summarise recent contributions in the area of energy harvesting (EH) and discuss their models with operating process, advantages, and limitations. Also these models are compared among themselves in terms of energy generation capacity. Methods/statistical analysis: Several models have been developed for EH by researchers all over the world. Here, an attempt is made to review the various models involved in EH to prevent the deficiency of energy. Findings: An EH technique is one of the most potential methods to encounter the energy deficiency problem. This study describes few models dedicated to EH and focuses on the major issues such as the necessity of highly efficient electronic circuits for capturing, accumulating, and storing even small electrical energy. Also the harvester circuit must stay in the active mode and be ready to perform energy capturing whenever harvestable energy becomes available. Nowadays, several sources (non-conservative) are used for EH such as warmth of human body. In future, the main focus is to enhance the efficiency of the energy harvester system. Improvements/application: Energy consumption is increasing day by day and its shortage is already predicted in the near future. Therefore, techniques for the generation of uninterrupted power provide for the incessant operation of any device to make life easier. Introduction A 5G network design is significantly encouraged by energy-efficient characteristics. Huge attention has been paid to the hikes in the cost of electricity for network operations besides its adverse effects on the environment. In a mobile network, 80% of the total power consumption is attributed to the base station. 1 At present, the available theory highlights the recent contributions in energy efficiency and other network performance indicators such as delay, bandwidth, etc., of the system. One of the potential solutions to combat the challenges/ problems related to energy inefficiency is energy harvesting (EH). In this process, the major focus is to produce energy from environmental-friendly sources such as sun (solar cell) and wind turbine. The main advantages of these sources are their capabilities to generate energy without CO 2 emissions, which prevents adverse effects on the environment. Solar solutions suit most of the countries in Asia and Africa due to the immense availability of sunlight at day time. On the other hand, wind turbine solutions are more appropriate for central European and Scandinavian countries due to cloudy and windy weather. For implementing EH, few challenges need to be taken care of by the network designer. In a systematic power supply resolution, a constant quantity of power is Keywords: Energy Harvesting, Thermo-generator, Classical Energy Harvesting Model, Generic Sensor Network Node, Energy Harvesting Using Super Capacitor available throughout the operation, whereas EH solutions are time-dependent and the availability of energy is a stochastic process, 2 for example, sunlight is not available at night. 
Therefore, the harvesting period is idle for a long time. Here various EH models are discussed with their operating processes, advantages, and limitations. Also, these models are compared among themselves in terms of their energy generation capacity. EH Using Thermo-generator A thermogenerator 3 is a device which generates an electrical charge from the heat of the human body (mostly from wrist). It is based on the see beck effect, 4 which is a phenomenon in which a temperature difference between two different electrical conductors or semiconductors produces a voltage between two substances. The generated voltage is directly proportional to the temperature difference between the two platforms known as cold junction and hot junction. The entry and exit of energy at the junction is totally based on two facts: 1) The available temperature gradient at junctions. 2) Absorption or dismissal due to Peltier effect 5 (which is defined as a temperature difference between two electrodes connected to a semiconducting material arising due to an applied voltage). This phenomenon is useful when one has to transfer heat from one medium to another on a small scale. In a thermoelectric module, 6 the generator consists of a thermocouple, which includes both types of semiconductors: p-type and n-type, having connection in series-wise electrically and parallel-wise thermally. Due to the see back effect, voltages arise, thus there is different temperature at the thermocouple ends. The connection of electrical circuit is made in such a manner that it allows adding the voltages obtained at each thermocouple and finally the total output voltage arises at the thermoelectric generator (TEG) end. This obtained output voltage is directly proportional to the number of thermocouples present and to the temperature gradient between the cold and the hot side. In 2007, Seiko Thermic reached a significant milestone. 7 They precisely used a TEG to convert heat from human wrist into electric energy. It was the first watch to operate by taking the power by the temperature gradient between the body and environment. TEG produces a power of at least 1.5 µW when the temperature difference was in between the range from −3°C to 1°C (here the temperature range indicated the heat absorbing range of watch from the wrist). A demo model of TEG is shown in Figure 1, which converts body heat into some amount of energy (nearly 1.5 V) to operate a watch and some specific embedded medical devices used to measure or monitor blood pressure, heart rate, etc. With a temperature difference of 5°C and a surface area of 0.5 cm 2 , TEG generates approximately 40 µW of power at 3 V. This generated electricity is stored in a thin-film lithium battery. A compact TEG model 8 was established whose output is in equilibrium with the macroelectronic system load. The working limit of a TEG is at 273 K which is the room temperature, but its limits do not extend beyond 120°C, due to which it provides the output power of 20 µW. Design Considerations The human body is a great source of energy which may act as the temperature difference between the bodies and the environment, which is accepted by the TEG to get electrical energy. In 1994, Stunner 9 concluded that the efficiency between temperature ranges of 20-35 o C is 5.5%. In a warmer region, the efficiency drops compared to a colder region due to a rise in heat energy. The placement of device in the body part is very crucial. 
It is suggested that the neck would be the best place for this device, as it is one of the warmest body parts and is easily reachable. It was observed that the power generated using a TEG lay within the range of 0.2-0.32 W. The material used for fabricating this device also plays a vital role. Aaltenkirch 10 suggested that a good thermoelectric material should have a large Seebeck coefficient and large electrical conductivity together with low thermal conductivity. These parameters are related through the figure of merit, given by Z = α²σ/λ, where α is the Seebeck coefficient, σ is the electrical conductivity, λ is the thermal conductivity and Z is the figure of merit. The design of a TEG also requires proper dimensions. The maximum driving power obtained from a TEG is inversely proportional to the cross-sectional area (A) and directly proportional to the length of the p-n thermoelectric leg couple. This leg length is also inversely proportional to the power density (D). Moreover, the specific power (W/cm²) will increase if the height of the thermoelectric elements is reduced while maintaining the same aspect ratio of their legs. In 2006, 11 it was theoretically concluded that useful specific power can be produced at room temperature if the figure of merit equals 0.9 at 10 K. The efficiency achieved by a thermoelectric microconverter is only approximately 5-6%. Ryan et al. showed that the conversion efficiency can be improved by manipulating the electrical and thermal transport at the nanoscale; this also enhances the figure of merit by a factor of 2.5-3 near room temperature.

Power Management Unit It is necessary to design the power module so that it converts an input voltage of a few hundred mV into a higher output voltage. At least 0.8 V is needed to start regulating a conventional step-up converter. Since the required energy transfer cannot be performed by a single converter module, an optimised power management system is needed for uninterrupted operation. The designed power management unit contains a charge pump in conjunction 12 with a step-up DC-DC converter. 13 The charge pump delivers the necessary start-up voltage to the switching circuitry of the step-up converter, which undertakes the energy conversion once activated. The power consumption is too high for a thermogenerator; therefore it is very difficult to implement a continuous RX path (receiving-end path). Without permanent receiver capabilities, the system has the drawback that part of the harvested energy cannot be collected.

Figure 2 represents a block diagram of a classical EH model. It consists mainly of five blocks, whose operation is described briefly below.

Working of Different Blocks Energy generator: The energy generator is fabricated using a piezoelectric fibre composite, which exploits the piezoelectric effect (the ability of certain materials to generate an electric charge in response to an applied mechanical stress). Detector: The function of the detector is to detect, calculate and estimate the amount of power received by the EH system in order to drive the desired application. The detector also has an in-built sensor, which alerts the system about the power available or the power consumed by the load.
EH module/energy storage: The function of EH module/ energy storage is to properly manage the power received from the system. It also manages power distribution from the system. Switch: The switch allows and restricts the passage of harvested energy to the load. Load: The load is finally the power consumer part of the system. There are various sources such as microprocessor sensors, wireless sensor network transceivers, etc., using the harvested energy as their energy source. Working Procedure As soon as the energy is generated from the transducer, the detector electronics receives the injected energy from the energy generating source. This electrical energy is now accumulated, collected, and stored in the internal storage unit such as a battery, capacitor, and supercapacitor. Energy is captured by operating it between two supply Once the maximum voltage is received, the system terminates further charging and sets the output on "on demand requirement" to power the load. As the output power goes down and reaches its minimum, the charging cycle begins again and attains the maximum value. This process is continuously running by taking nearly 10 mA current as input. The same process charge/cycle times repeat every 4 minutes at an average input current of 10 mA. To obtain optimum performance and long energy retention time, EH electronics need to be so designed such that it consumes energy much smaller than the energy input by the generating source, which leads the design to incorporate micropower devices. Electric Generic Sensor Node 14 An electric generic sensor node harvests electric energy from a mechanical source. In this proposed technique, mechanical stress is used for EH. The system uses a piezoelectric transducer, such as a Teminc piezo atomiser, microporous atomiser or metal mesh atomiser, etc., to harvest energy. It has been proposed to be implemented mainly on an overbridge or flyover. The transducer system is physically placed over the surface of the bridge/ flyover. Whenever a vehicle passes through the bridge or flyover, the pressure applied to the transducer converts this mechanical energy into electrical energy. These transducers are connected to the sensor node which accumulates all the energy generated by the transducer and sends to the powerhouse. 15 These fabricated systems are sensible/capable enough to detect a minor human stepping and convert it into power. The block diagram of an electric generic sensor node is shown in Figure 3. The different parts of the proposed model are 16 : Sensing plates: These are inscribed over the surface of a flyover to detect the movement of mechanical pressure on the overbridge. The system is designed in such a way that whenever there is a passage of vehicle over the sensing plates, the system will generate electricity from the load applied to the plates. Counter: The main function of a counter is to count the amount of movements over the bridge and send it to the transducer. Transducer: The main function of a transducer is to convert one form of energy into another. In energy generic sensor node, a piezoelectric traducer is used. As implied by the name, the transducer is based on piezoelectric effect and converts mechanical energy into electric energy. Battery: The battery is used for the storage of harvested energy. EH Using Supercapacitor EH based on a supercapacitor uses three sources of power at the same time. These sources are solar panel, battery and supercapacitor. 
These power sources switch among themselves as per the requirement of the system. A key component of the system is a microcontroller, which is used to collect the data and take a decision regarding switching the source according to the requirement. The transducer is used to monitor the output sources. If solar energy is not available due to climatic conditions, the reserve source, i.e. battery, will be utilised. At first, the transducer observes weather conditions on the basis of the intensity of sunlight, then it sends the observed data to the microcontroller to take a decision on switching. 17 Figure 4 represents the block diagram of the system. The different components of the system are described below: Sensor: The sensor used in the supercapacitor is temperature sensor. It is a linear-type semiconductor 13 Lm35 or Lm32, which can produce an output of magnitude in the range of mV. This produced output voltage is proportional to the temperature. The working temperature range of these transducers is +/−1/4°C at room temperature and 3/4°C in overall range of −55°C to +150°C without any external calibration or trimming. Solar panel: The solar panels are composed of monocrystalline silicon. These solar panels are arranged in the wafer form having a thickness in-between 160 and 240 mm. These panels are small and provide better output, which helps to reduce the need for MPPT (maximum power point tracking) technique. The maximum charging current available from solar panels is 300 mA. However, it is not possible to get a continuous output supply of 300 mA. The capacitors are charged by current varying within the range of 0-300 mA. Another very important factor of charging is time. Solar EH has a limited time window during which the capacitor needs to be charged. The charging time of the capacitor depends upon the time constant and the maximum current available from the power source. These limitations necessitate the use of supercapacitors over normal capacitors. Battery: A Li-ion battery of 3100 mAh at 3.8 V is used for this system. Such a battery was selected due to the large number of recharge cycles, high charge density, low leakage, lack of memory effect, and sufficient voltage with one battery. Wireless transceiver: A tanangf4 wireless transceiver was used to implement this EH model. The operating frequency of the transceiver is 2.4 GHz which lies within the ISM band. The working principle of this wireless transceiver is the same as zigbee, which is specially built to control the sensor network of IEEE 802.15.4 standard in wireless personal area networks. This transceiver needs 3.3-3.6 V supply voltage and 45 mA of current during transmitting and 50 mA during receiving. Microcontroller: The microcontroller is a key component of this model, which performs various functions such as monitoring the output of the system, collecting and accumulating the harvested energy and the task of decision making for the selection of the particular energy source according to the situation. The microcontroller used to implement this model was PIC18F4520. The entire system works with the help of solar power, two supercapacitors (each of 25 F), and the battery unit. 18 When the solar panel powers up the circuit, the battery and supercapacitor gets charged. The two capacitors are connected in series-wise which raises the voltage up to 5.2 V across them. The minimum required operating voltage of the system is 3.3 V. 
This model was mainly designed to use less battery source, but the nodes need battery power to start the operation. At first, the system reads the data received from sensors and collects the voltage from sources. The microcontroller then disconnects the battery and switches the solar panel mode if the voltage of the capacitor (V cap ) and solar panel (V sol ) is at their maximum value. The system will switch to the supercapacitor if V cap is more than operating voltage, i.e. 3.3 V. If the intensity of sunlight is good, this model is able to charge the supercapacitor and power the node simultaneously. Once the supercapacitor is fully charged, the processor will wait until the solar panel output falls below the threshold and then switches to the supercapacitor. Conclusion Various models have been developed to harvest energy and store it. This study described few models dedicated to EH and focused on the major issues such as the necessity of highly efficient electronic circuit for capturing, accumulating and storing even small electrical energy. Also, the harvester circuit must stay in the active mode and be ready to perform energy capturing whenever harvestable energy becomes available. Table 1 compares the various models described in this study. In the future, these models will be embedded into a mobile handset to increase the potential of mobile battery. In the thermogenerator model, the warmth of human body is used to produce energy, whereas a piezoelectric transducer has been used in a classical energy model. However, the supercapacitor model employs three sources of power, i.e., battery, solar panel and supercapacitor, and source switching is carried out according to the situation. The microcontroller monitoring the output of the system collects and accumulates the harvested energy. The task of decision making for the selection of the particular energy source is based on a dynamic situation. In the energy generic sensor node, mechanical stress is converted into power with the help of a pressure transducer.
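The source-selection policy described for the supercapacitor model above can be summarised in a few lines of code. This is a simplified sketch of the decision logic as the text describes it; the threshold v_sol_min and the function name are invented here for illustration and are not taken from the PIC18F4520 firmware.

V_OP = 3.3  # minimum operating voltage of the node (V), as stated in the text

def select_source(v_sol, v_cap, v_sol_min=1.0):
    # Sunlight available: run the node from the panel and charge the storage.
    if v_sol > v_sol_min:
        return "solar"
    # No sun, but the series supercapacitors are still above the operating voltage.
    if v_cap > V_OP:
        return "supercapacitor"
    # Last resort: fall back to the Li-ion battery reserve.
    return "battery"

# example: a cloudy moment with charged supercapacitors
print(select_source(v_sol=0.4, v_cap=4.8))   # -> supercapacitor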
4,372.2
2019-11-20T00:00:00.000
[ "Engineering", "Environmental Science", "Physics" ]
Climate finance and women-hunger alleviation in the global south: Is the Sub-Saharan Africa case any different? To unearth the influence of climate finance (CF) on women-hunger alleviation in Sub-Saharan Africa (SSA), the study used unbalanced panel data for 43 SSA countries for the period 2006–2018. Data was analysed using system-GMM to deal with the endogeneity problem inherent in the model, among other panel regression estimators. Also, the sensitivity of the estimates was carried out using panel fixed effect quantile regression. The findings showed that CF and its components have a significant effect on women-hunger alleviation in SSA, apart from FDI. Further, control of corruption also showed a significant women-hunger alleviation impact. For the climate variables, areas in SSA with higher temperature are more likely to experience worsened women-hunger. Based on the findings, the study recommends that SSA countries need to strengthen their fight against corruption. More so, donors should extend CF as financial aid or support to government budget, due to their potential of alleviating women-hunger. Introduction Women empowerment in the agricultural value chain, access to resources and trade has the potential of speeding up the realisation of Sustainable Development Goal-2 (SDG-2)-end hunger-and the Malabo commitments.The Malabo declaration imbibes all African countries to end hunger by 2025.This agenda is realisable only if the problem of gender inequality in food production is ameliorated [1].The gender equality dimensions of the Malabo declaration, focuses on women owning 30 percent of documented land and accessing 50 percent of finance to end hunger.In that same vein, SDG-2 postulates that all forms of hunger should be ended by 2030, including addressing the nutritional needs of pregnant and lactating women and adolescent girls worldwide (Goal 2 | Department of Economic and Social Affairs (un.org)).It is estimated that 2 billion people worldwide-representing one in every four persons-have no access to enough reliable nutritious and safe food.In 2020, 690 million people were estimated to be chronically hungry, a figure expected to increase by 130 million due to COVID-19 [1,2].[3] opined that women and girls account for 60% of all those facing chronic hunger worldwide.Meanwhile, agriculture is the mainstay of over 1.4 billion women worldwide living in rural areas.In the global South, approximately 43% of the labour force are women, with 66% of them being livestock keepers [2,4].Yet, women have very little access to and control over land and other productive resources, market and education as compared to men.For instance, women form only 13% of agriculture land-owners globally [5].The gender situation seems more interesting in Africa, where women form on average 60% of the agricultural labour force and play a very significant role in hunger alleviation by growing up to 89% of what the family consumes [6,7].A problem that is gradually creeping into the agricultural sector in the global south is that men are drawn away from farming and increasing the role of women in growing food for family income diversification.According to [8], this is known as the 'feminisation of agriculture'.Linked to the 'feminisation of agriculture' is the problem of 'feminization of poverty'.'Feminisation of poverty' reinforces the idea that the majority of the world's poor and hungry ones are females [4,7]. 
Interestingly, for all women in the reproduction age, one-third of them suffer from anaemia that is attributable mainly to food deficiency (Goal 2 | Department of Economic and Social Affairs (un.org)).The Agriculture Development Bank in 2013 pointed out that 55% of global gains achieved in reducing hunger is directly attributable to progress made in women's education and levels of equality [9].It will cost the world $23,620 per person and global-equivalent to an aggregate of $160.2 trillion if the world fails to address the cost of gender inequality [10]. The number of hungry ones continues to show an upward trend worldwide.For instance, the number of food insecure people in Latin America has tripled; doubled for Western and Central Africa; and a 90% increase in Southern Africa [1].The United Nations posits that a quarter of the world's hungry ones-denoting 220 million people-reside in sub-Saharan Africa (SSA), where a looming danger of humanitarian food crisis is imminent [4].It is interesting to note that SSA women represent more than a fifth of the global South's agricultural workforce-denoting 500 million women and are often subsistence farmers that grow crops for consumption and not to sell [4].These statistics show that women's labour productivity is essential in the food sector to end hunger, especially at a time when climate change is adversely impacting crop production.However, the economic contribution of women in ending hunger has been overlooked in SSA.For that matter, it is vital to deal with hunger, food security, and rural poverty in a holistic manner.This must be done by integrating the mitigation and adaptation of climate change, biodiversity, and gender equality.This is an approach relevant to rural women empowerment in SSA. The exacerbating effect of climate change on hunger cannot be overemphasized in Africa-a region where only four percent of farms are irrigated [11,12].For instance, rain-dependent crops like peanuts and millet are experiencing reduced productivity attributable to sporadic rainfall.In other to circumvent the problem of crop failure, women in Africa are beginning to grow drought-resistant rice among other cereals.If SSA governments and the international community commit to helping women grow climate-resilient food such as millet and peanut, it will help deal greatly with hunger among the vulnerable group (i.e.women and children).Millet, for instance, is rich in protein and calcium, important nutrients relevant for growing children and pregnant or lactating mothers.Peanuts also contain very high protein relevant for muscle and brain development [13].This indicates that, with a clear focus by SSA on mitigating and adapting to climate change in the agriculture sector, women-hunger will be reduced to the barest minimum. 
Climate change is worsening hunger among women, men, and children in SSA; through the vagaries of the weather, extreme temperature, sporadic rainfall, and an upsurge in the intensity and frequency of climate-induced disasters such as droughts, floods, cyclones and storms.It is estimated that globally, climate-induced disasters cost billions of dollars as a result of economic damage and the destruction of lives and livelihoods of billions [14][15][16][17].If no climate action is taken, the amount needed to address climate change impact in the future will grow exponentially.As of now, the projected cost globally is $69 trillion by 2100 if the 2˚C threshold is crossed [18].However, the Intergovernmental Panel on Climate Change (IPCC) [18] mentioned that the tipping point of 1˚c has already been crossed since 2017.Detailed research however projects that women are expected to be highly affected by climate change than men because they are the most vulnerable and marginalised group.Especially, women in SSA need to contend with limited access to finance, weak land tenure system and persistent social inequality hampering their agricultural capacity and productivity. As a result of the foregoing, Article 4 of the 1992 UNFCCC posits that developed countries should provide 'new and additional' financial flows (climate finance) to help developing countries mitigate and adapt to climate change-termed climate finance [16,[19][20][21][22][23][24][25].Climate finance has been mentioned by the IPCC and UNFCCC, as a very important tool in addressing the gender inequality problem.If the climate finance architecture is designed to deal with gender issues; it will promote inclusive, equitable, and just climate actions to alleviate hunger and achieve food security.The good news is that climate finance donors have increased their gender targets in climate actions.This is evidenced in the increase in climate finance targeting gender by 55% in 2014 from 2010 [16].Yet in 2014, gender accounted for only 31% of total climate finance flows-representing $8 billion-from major donors [16]. In the hunger literature, several studies have looked at climate change and hunger or food security among women in SSA [1,7,13,[26][27][28][29][30].A few studies tried looking at climate finance and hunger [31,32], and others tried explaining the need to channel enough climate finance to deal with gender equality [9].To the best of our knowledge, no study has empirically tested the influence of climate finance on women-hunger in SSA.This study contributes to the hunger literature in two folds.Firstly, by determining the influence of climate finance in alleviating hunger among the most vulnerable gender in SSA, by dealing with the problem of 'feminisation of poverty' which is closely linked to hunger in the sub-region.Secondly, to find out whether adaptation or mitigation finance better help in women-hunger alleviation in the region most susceptible to the exacerbating effect of climate change. 
Literature review In gender literature, feminist theorists are the major theories that try to understand the nature of gender inequalities in society [33].They believe that before the 1970s most scientific studies focused on male-only samples, and the results generalized to cover women and children [34].However, the effect of the environment, accessibility of finance and land acquisition on poverty and hunger differ between men and women.Especially in most developing countries where women make sure to serve their husbands and children before they attend to themselves.More so, the environment worsens women poverty and hunger through health-related problems.Women are exposed to environmental-related health hazards such as cancer, asthma, lead poisoning, reproductive disorders, and other types of cancers [34].Also, the livelihood of most women is dependent on climate-sensitive sectors such as subsistence agriculture, forestry and water [13].In addition, women have less capacity and resources compared to men in mitigating and adapting to climate change. Some prior studies indicate that women are 14 times more likely to perish in climate-related disasters than men [35,36].An instance was the 1991 cyclone and flood in Bangladesh, where 90% of the victims were females.Some reasons given included the fact that early warning signals were not sent to women who were predominantly caregivers at home; most of them lacked swimming skills; some of the women tried escaping the floods holding infants and towing elderly family members, whereas husbands escaped alone [34].This is a common situation in the global south like SSA too, although not the case in the global north or developed countries.This affirms the existence of the cultural feminist theory in the global south.The cultural feminist theory asserts that women and men experience the social world differently.This is largely due to the differences in values associated with womanhood and femininity in culture [33].Explaining the reason why women hunger is a major problem in the global south, especially SSA. The existence of feminism in agriculture, poverty, and hunger is further affirmed by studies such as [37].They found that women farmers have lower rates of agricultural productivity than men farmers.This study was carried out among five countries in SSA-Ethiopia, Malawi, Rwanda, Uganda, and United Republic of Tanzania.The study sort to show that gender gaps exist even in agricultural productivity.These gaps arise not because women are less efficient farmers but because they experience inequitable access to agricultural inputs; including family labour, high-yielding crops, pesticides and fertilizers.To close the gender gap, efforts must be made to equalise women's access to agricultural inputs such as time-saving equipment [37,38].These affirm the existence of the structural oppression theory of feminism by Freidrich Engels and Karl Marx [33].The proponents argue that the working class and powerless (like women) are exploited due to capitalism.The capitalist which mostly involves men oppresses the women due to their lack of power.To an extent, funds geared toward poverty alleviation and hunger extended to women are taken by their husbands, leaving them poor.Women do so to ensure peace and stability in the family.Based on that, it is expected that climate funds extended to women will not necessarily help alleviate poverty but in most cases funds are usurped by their husbands. 
It is established in the hunger literature that, if women (constituting 43% of the agricultural labour force in developing countries) have equal access to finance as men, food production could rise by up to 4%, potentially reducing the number of undernourished people by 12-17% [7,39,40].Sadly, in some African rural communities, women are not allowed to have their own bank accounts, negotiate with suppliers or use other financial services.Other genderbased constraints include comparatively diminished access to technology, services, modern inputs, and markets.Across Africa, women also tend to have smaller plots and less control over labour and land.Some prior studies argue that, until the world pays greater attention to gender relationship in fund allocation and loan reimbursement, even funds targeting women may land in the hands of men [16].For instance, Goetz and Sen Gupta [41] found that men use female household members in securing loans, due to women's higher loan repayment rates.When women fail to obtain loans desired by male relatives, it breeds tension at home [42]. The single most important determinant of food security in the hunger literature is gender equality, which is capable of contributing significantly to a country's growth [43,44].An example is a study by Smith and Haddad [45].The study found that 43% of the hunger reduction which occurred between 1970-1995 among developing countries, is attributable to the progress made in women's education.A value equivalent to the combined effect on hunger reduction of increased food availability (26%) and improvements in the health environment (19%) for that same period [45].In that same study, an additional 12% of hunger reduction was attributable to an increase in the life expectancy of women.In short, a combined total of 55% of hunger reduction among developing countries during the period, were attributable to the improvement of women's situation within society [45].The global hunger index was used to compare hunger among countries globally.The findings revealed a significant correlation between hunger and gender inequalities [46].Implying that, countries with very high hunger index are those with severe gender inequalities [43,[47][48][49][50][51][52][53]55]. 
A tool that can be effective in dealing with gender inequalities in the global south is Climate finance (CF).The reason is that designing CF to promptly respond to gender-related issues tends to enhance equitable and inclusive climate action to enhance sustainable development.As a result, OECD countries and other CF donors are increasingly targeting the gender component of any climate action by a country [54].The OECD [54] report intimates that total global aid targeting gender and climate change sour upward by 55% in 2014 from 2010.The report further indicated that gender accounted for 31% -equivalent to USD 8 billion-of CF provided by major donors that are members of the Development Assistance Committee (DAC) [14,15,17].This study contributes to both hunger and feminism literature, by finding out whether CF so far received by SSA countries is helping realise women-hunger alleviation.For that matter, climate finance is expected to reduce women-hunger as their haemoglobin levels rise.The null hypothesis to be tested in this study is: H o : Climate finance reduces women-hunger in SSA 3 Methodology Data To better estimate the impact of climate finance on women-hunger alleviation in SSA, data for all SSA countries must be used.However, some SSA countries have very large missing data points for the study period; these include Cape Verde, Equatorial Guinea, Sao-Tome and Principe, Seychelles, and Somalia.This leaves us with an unbalanced panel data of 43 countries in SSA for the period 2006-2018 used for the estimation.The estimation was carried out using System Generalised Method of Moment (SYS-GMM), Pooled Ordinary Least Squares (POLS) and panel fixed effect (FE) models.A robustness check was carried out using panel fixed effect quantile regression. Dependent variable.The main dependent variable of the study is women-hunger (a variable which is very cumbersome to measure in practice, since hunger is felt at the individual level).For that matter, we employed a variable used by FAOSTAT in determining food utilisation by women under food security-the "prevalence of anaemia among women of reproductive age" (age ranges from 15 to 49 years).Although the variable was collated from FAOSTAT, its main source is from the World Health Organisation (WHO) Global Health Observatory data repository.The variable was computed by looking at the percentage of women in the age bracket of 15-49 years, which have haemoglobin levels less than 120 g/L for non-pregnant and lactating women and less than 110 g/L for pregnant women.Low haemoglobin and iron deficiency is a serious consequence of hunger among women of reproductive age and children.Women-hunger is highly prevalent among three of the regional blocs: Economic Community of West Africa State (ECOWAS), Economic Community of Central Africa State (CEMAC) and Southern Africa Development Community (SADC), with all having values above the overall mean of 41 percent (refer to Table 2).Only the East Africa Community (EAC) has a women-hunger score below the mean (refer to Table 1). 
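The anaemia-based hunger indicator defined above can be made concrete with a small helper. The cut-offs follow the thresholds quoted in the text (haemoglobin below 120 g/L for non-pregnant women and below 110 g/L for pregnant women); the function names and the toy records are purely illustrative, not the WHO/FAOSTAT processing code.

def anaemic(haemoglobin_g_per_l, pregnant):
    # threshold quoted in the text: <110 g/L if pregnant, <120 g/L otherwise
    return haemoglobin_g_per_l < (110.0 if pregnant else 120.0)

def anaemia_prevalence(records):
    # records: iterable of (haemoglobin in g/L, pregnant flag) for women aged 15-49
    flags = [anaemic(hb, preg) for hb, preg in records]
    return 100.0 * sum(flags) / len(flags)

# toy example: two of the four women fall below their respective cut-off
print(anaemia_prevalence([(118, False), (125, False), (105, True), (130, False)]))  # 50.0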
Independent variables.The main independent variable of the study is climate finance (CF) and other financing variables such as aid, FDI, and government expenditure.In this study, CF is the 'new and additional' financial flows from developed countries to help developing countries mitigate and adapt to climate change in million USD.CF data was sourced from the Organisation of Economic Co-operation and Development (OECD), Donor Assistant Commitment (DAC) climate-related development finance.From Table 1, it is clear that EAC received the largest sum of climate finance (i.e.323 million per year on average), followed by the ECOWAS region.Sadly, the CEMAC region is the least recipient of climate finance in SSA.Further, climate finance data is segregated into adaptation and mitigation.For mitigation, SADC region is the second largest recipient after EAC (refer to Table 1).Both mitigation and adaptation finance extended to SSA show very high variability across the region, as indicated by a very high standard deviation (146378.9 for adaptation finance and 141426.4for mitigation finance) (refer to Table 2).The reason may be due to differences in the vulnerability levels of all countries in the sub-region.If climate finance is helping achieve women-hunger alleviation contrary to the dictates of the structural oppression theory of feminism, then we expect a significant negative effect on women-hunger.To ensure that climate finance extended to SSA is not siphoned or becomes fungible, we include a corruption control variable (COC) which is sourced from Notre-Dame Global Adaptation Index (ND-GAIN).From Table 2, it can be seen that corruption control in SSA is very low (Average of 0.27), although the fight is better in the SADC region (With an average of 0.4) there is more room for improvement for all countries in SSA. Other financing variables-such as aid, FDI and government expenditure are included in the study to cover for the broader definition of climate finance as stipulated by the Paris Agreement.The broader definition explains climate finance as ". 
..financial resources provided to assist developing countries concerning both mitigation and adaptation."(UNFCCC, 2015: pg.13).In light of that, the Paris Agreement enjoins developed countries to support developing countries to mitigate and adapt to climate change, and at the same time called for, and encouraged the broader approach in understanding climate finance.This includes climate finance from the private sector via foreign direct investment (FDI); governments nationally determined contributions to climate change through government expenditure (Gov't-Spending), and other forms of aid that helps mitigate and adapt to climate change.Data on FDI and aid are sourced from the World Development Indicators (WDI) and Government Spending from FAOSTAT.Our a-prior sign for all the climate finance variables is expected to be negative-to indicate a hunger reduction effect.Majority of FDI flows to the SADC region compared to anywhere else on the continent (averaging over 1.2 billion annually).CEMAC region continues to be the least recipient of FDI among the other climate finance variables (refer to Table 1).In this study, aid consists of per capita Overseas Development Assistance to each country in constant USD.From Table 2, an average of USD 70 per person is extended to SSA.ECOWAS has been the highest recipient of aid per capita with CEMAC region being the least recipient throughout the study period (refer to Table 1).Since the focus of this study is on women-hunger, Government spending looks at government's annual expenditure on agriculture as a percentage of total expenditure.All governments in SSA spend averagely only 5 percent of their total expenditure in agricultural development (refer to Table 2).This value is far lower than that needed to achieve the Malabo declaration 2025.EAC is the area that spends the highest percentage of their total expenditure on agriculture (i.e. 7 percent) yet is still woefully inadequate. Next, to establish the effect of climate change on women-hunger, temperature variable is included in the model.This variable is chosen instead of precipitation and rainfall due to the dominant climate of SSA.Secondly, including both rainfall and temperature in the model will create some noise.Temperature measures mean annual temperature for each country in centigrade, sourced from World Bank Climate Change Knowledge Portal (WBCCKP).ECOWAS is the most heated place in the sub-region. 
To be sure of whether an increase in economic growth and agricultural productivity influences women-hunger in the face of climate change, we included GDP per capita, food production (FP), import of food (FI), agricultural land use (AGL) and irrigation.Apart from AGL which is sourced from WDI, the rest of the variables are collated from FAOSTAT.GDP per capita is a measure of welfare or poverty level and computed as GDP divided by the population of a country in constant 2010 USD.CEMAC region has proven to have the highest GDP per capita annual average of 8154-which is a value almost double the region's average of 4,711.Food production in this study is the per capita food production variability variable from FAO-STAT, which is computed as the "food net per capita production value in constant 2004-2006 international USD".The variable is included to find out to what extent does unstable food supply influences women-hunger in SSA.SADC region has very high food production variability, a problem that may stem from the poor rainfall pattern in the region.However, the ECOWAS region has seen a more stable FS which may be attributable to a more stable temperature in the area.Trade data in this study is the value of food imports in total merchandise exports, computed as a percentage of food imports over total merchandise export by FAOSTAT.From Table 1, it is indicative that EAC spends averagely 88 percent of its total merchandise export on importing food.This explains why much climate finance is extended to the area, due to high food insecurity which calls for high food import in the sub-region.Agricultural land use is computed as agricultural land expressed as a percentage of total land area, sourced from WDI. SSA has used an average of 47 percent of its land for agricultural purposes, with EAC using an average of 59 percent.Finally, irrigation is computed as the percentage of arable land equipped for irrigation and sourced from FAOSTAT.It is a measure that looks at the vulnerability of the agricultural sector to climatic shocks including water stress.Only 5 percent of SSA's land is equipped for irrigation, yet EAC members used 13 percent of their arable land for irrigation, the highest in SSA.The least is by the CEMAC region which uses 0.7 percent of arable land for irrigation. Model specification and estimation technique In the hunger literature, hunger is modelled as a production function-either as a translog production function or a Cobb-Douglas production function.This is due to its correlation with food production [32,55,56].Following the dynamic model proposed by [38], that modelled hunger of country (i) at a specific time (t) as an unobserved latent variable (y it ); in this study, variable (y it ) represents women-hunger in each country for a particular year. 
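Eq 1 is referenced but not shown here; a dynamic panel specification consistent with the variables defined around it would take the following general form. This is a hedged reconstruction of the shape of the model, not the authors' exact equation:

WH_it = α + ρ · WH_{i,t-1} + X_it' β + μ_i + ε_it

where WH_it is women-hunger (prevalence of anaemia among women of reproductive age) in country i and year t, WH_{i,t-1} is its first lag, X_it stacks the climate-finance, climate and agricultural-productivity regressors, μ_i is a country-specific effect and ε_it is the idiosyncratic error.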
The X it vector constitutes macroeconomic variables like climate finance variables (climate finance, aid, FDI, Gov't-Spending), climate-related variables (Rainfall and temperature) and agricultural productivity variables (Food production, food import, agricultural land use and irrigation), and Ɛ it stands for the error term with zero mean, constant variance and normally distributed.Following Rodgers baseline model specified in Eq 1, two main equations were written; Eqs (2) and (3) to estimate the impact of climate finance on women-hunger in SSA.Eq (2) looks at climate finance on women-hunger, the CF variable is based on the narrow definition of climate finance.In model (3), the study employed the broad climate finance definition-by including FDI, aid and Gov't spending in the model.This was to find out whether they will influence women-hunger in SSA. Eqs 2 and 3 are estimated using both static and dynamic panel regression models to determine the influence of climate finance on women-hunger alleviation in SSA.For static panel data analysis, fixed effect (FE) and Pooled Ordinary Least Squares (POLS) panel data analysis.FE models are very important on the theoretical basis since they take care of the time-invariant heterogeneity across countries, and also provide robust results to omitted variable biasedness [57].The relationship between climate finance and women-hunger is expected to be bidirectional.In the sense that much-gendered climate finance will be extended to countries with a higher risk of women-hunger, or countries with a greater number of hungry women are more likely to attract enough climate finance.This relationship is more likely to generate an endogeneity problem but can be dealt with by using dynamic panel regression such as system generalised method of moment (SYS-GMM).Further, dynamic panel regression model (Specifically SYS-GMM) was employed in this study due to large number of cross sections (43 countries), with a relatively shorter period (2006)(2007)(2008)(2009)(2010)(2011)(2012)(2013)(2014)(2015)(2016)(2017)(2018).Finally, the inclusion of a lagged dependent variable in Eqs 2 and 3 is also a cause of endogeneity, which can be corrected using dynamic panel regression models. 
To capture the endogeneity and biasedness inherent in the models, the GMM estimator by Arellano and Bover [58] and Blundell and Bond [59] was employed.Here, values of the lagdependent variables are implored as instruments to cater for the endogeneity problem.In extant literature, both differenced GMM and SYS-GMM have received a lot of attention.However, differenced GMM has been found to be less efficient in the face of small sample size with persistent time series [60,61], as in the case of this study.SYS-GMM outperforms difference GMM when the time series follows a random walk process, and the instruments in the level estimation are efficient predictors of the endogenous variables [59,61].SYS-GMM combines the standard set of moment conditions in first-difference with their lagged levels as instruments, and an additional set of moment conditions derived from the equation in levels [58,59,62].A two-step SYS-GMM estimator is asymptotically more efficient compared to a one-step estimator; based on a sub-optimal weighting matrix [61].Further, SYS-GMM is employed because [63] argued that it uses the orthoganality condition of the lag dependence variable, and first difference to the error term which suffers from potential small sample bias in fixed time periods; in a situation where the dependent variable shows high degree of persistence.SYS-GMM algorithm uses additional moment conditions, which is generated by combining the first difference of the lagged dependent variable and the sum of the cross-sectional fixed effect and the contemporaneous error term [63].The moment conditions are strictly exogenous, in other to deal with the endogeneity problems inherent in most finance and economics data [63]. Next, the study sort to test the overall validity of the instruments implemented in GMM using Sargan test.Further, a test of whether there is no serial correlation between the error term and lagged instruments used are enough to explain the model estimated was conducted.This was done using the Arellano-Bond's first and second-order tests of autocorrelation.In other to ensure the absence of multicollinearity in the model, the study carried out a cross-correlation analysis (refer to Table 3).Apart from aid and trade which showed a strong significant positive correlation, the rest of the variables did not show a very high correlation among themselves. Findings The results of both static and dynamic estimations are presented in Table 4.The main methodology employed for our analysis was SYS-GMM, and the results were triangulated with POLS and FE estimators.The robustness of our results was checked with panel quantile regression models.To ensure the validity of the instruments used in the absence of serial auto-correlation of residuals in SYS-GMM, the study performed the Sargan test of overidentifying restrictions with the Arellano-Bond test for serial correlation.The finding from Table 4 indicates that all null hypotheses were rejected.This implies that the instruments are appropriate and shows the absence of serial correlation for the residuals in second differences. 
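The two sets of moment conditions that the system estimator combines, as described above, can be written compactly. These are the standard Arellano-Bover/Blundell-Bond conditions, stated in the notation of the reconstruction sketched earlier rather than copied from the paper:

E[ WH_{i,t-s} · Δε_it ] = 0 for s ≥ 2  (lagged levels instrument the first-differenced equation)
E[ ΔWH_{i,t-1} · (μ_i + ε_it) ] = 0   (lagged first differences instrument the equation in levels)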
The main objective stated at the outset of this study was to assess the influence of climate finance on women-hunger in SSA. The findings of the study, as presented in Table 4, show that climate finance significantly reduces women-hunger at the 1 percent level of significance in all three models, indicating that a percentage increase in climate finance significantly reduces the number of hungry women in SSA by at least 1.5 percent. This shows that climate finance has an elastic impact on women-hunger reduction in SSA (Fig 1 refers), indicating that world policies targeting women-hunger and poverty will yield better results when channelled through climate funds. This is no surprise, because CF targeting gender has increased by 55% since 2014, hence the positive outcome. This outcome, however, contradicts the structural oppression theory of feminism by Friedrich Engels and Karl Marx [33]. That theory argues that if women receive funds, their male counterparts will oppress them and take the money, yielding no positive influence on the women. Next, we look at the other constituents of climate funds from the broader definition (i.e. aid, FDI and Gov't-spending). Aid showed a significant women-hunger alleviation effect at the 1 percent level of significance (source: SYS-GMM estimates from Table 4). A unit increase in aid reduces hunger by at least 0.01, as depicted in Fig 1. This finding is opposed to the dictates of the structural oppression theory and the findings of [32]. It points out that climate funds extended by way of financial aid and targeted at women yield hunger alleviation among women. Thirdly, Gov't-spending also showed a women-hunger reduction effect at the 5 percent level of significance, showing that government contributions allocated to mitigating and adapting to climate change can play a significant role in women-hunger alleviation. Sadly, FDI indicated a women-hunger worsening effect at the 1 percent level of significance. The results indicate that a USD 1 million increase in FDI worsens hunger by 1.4 percent (refer to Fig 1 and Table 4). The FDI results show that private sector investment in CF may have little influence on women-hunger alleviation. This finding affirms the pollution haven hypothesis. A weak women-hunger alleviation effect from FDI is expected because of the growing problem of agricultural feminisation among developing countries. Even though FDI breeds more formal employment, women are still forced into the informal sector such as agriculture due to childbirth and urbanisation. This choice of employment also allows them time to take care of the home. For effective and efficient usage of CF in developing countries where corruption abounds, there is a need to control for corruption in the analysis. Interestingly, COC indicated a significant women-hunger alleviation effect at 1 percent, proving that as SSA exerts more effort in fighting corruption in its CF usage, this will translate into stronger women-hunger reduction.
The study also looked at how climate change is influencing women-hunger in SSA. Temperature showed a significant women-hunger worsening effect across all models and estimators. This shows that hotter regions in SSA are more likely to suffer higher rates of women-hunger compared with those in cooler temperate areas, which explains why the ECOWAS region has the highest average women-hunger rate in SSA. Population growth indicated a significant women-hunger worsening effect at the 1 percent level of significance. This finding affirms the Malthusian theory, which argues that the population growth rate will outstrip the food production rate, causing a rise in hunger. Other variables included in our model are agricultural or food production-related variables. These include food production, food import, agricultural land use and irrigation. Among the agricultural or food production variables, agricultural land use and irrigation showed significant women-hunger alleviation. By contrast, food import showed a women-hunger worsening effect in SSA. This indicates that climate funds invested in reclaiming agricultural land and improving irrigation will lead to improvements in women-hunger alleviation, whereas climate funds or development aid extended via food imports to developing countries will worsen women-hunger. Climate finance is further segregated into mitigation and adaptation finance and analysed accordingly. Adaptation finance is funding geared toward adjusting to the current and future unexpected harmful impacts of climate change. On the other hand, mitigation finance refers to funds extended to countries to reduce greenhouse gas emissions, to prevent or minimise the exacerbating effects of climate change. The OECD [54] report asserted that the mainstreaming of gender issues is more prevalent in adaptation than in mitigation activities. Furthermore, the mainstreaming of gender has been largely uneven in climate-sensitive sectors. This section seeks to find out which financing mode (i.e. adaptation or mitigation financing) significantly impacts women-hunger alleviation in SSA. The findings indicate that both mitigation and adaptation finance significantly improve women-hunger alleviation (refer to Table 5). Yet adaptation finance is more efficient than mitigation finance in alleviating women-hunger. This affirms the OECD [54] report and also shows that much climate finance is extended to areas already experiencing the exacerbating effects of climate change.
Sensitivity analysis As proposed by [64] and [65], the study further tested the robustness of the static and dynamic panel regression results using fixed effect panel quantile regression estimates to control for distributional heterogeneity. The traditional regression estimator focuses on the mean effect, which may cause over- or under-estimation of the relevant coefficients or even lose the ability to detect important relationships [66,67]. Quantile regression generalizes median regression analysis to other quantiles. Following the study by [67], we specify a fixed effect panel quantile regression model (a sketch of the standard specification is given at the end of this section). A major setback affecting the panel fixed effect quantile regression estimator is the inclusion of a considerable number of fixed effects (α i ), which is subject to the incidental parameters problem [67][68][69]. To circumvent this problem, [65] and [67] proposed treating the unobservable fixed effects as parameters to be jointly estimated with the covariate effects for different quantiles. This method introduces a penalty term in the minimization to address the computational problem of estimating a mass of parameters. In Table 6, we present the 25th, 50th, 75th and 90th quantiles of the conditional women-hunger distribution. Fixed effect panel quantile regression estimates presented in Table 6 are consistent with the GMM, fixed effect and POLS estimates. For instance, almost all the climate finance variables (climate finance, mitigation and adaptation finance, aid, FDI, COC and Gov't-spending) showed a homogeneous effect on women-hunger alleviation from the 25th to the 90th percentile, although the effect was not significant for most variables. In addition, the climate change variable (i.e. temperature) also exhibits a homogeneous effect on women-hunger alleviation for all quantiles. The agricultural or food production variables all showed homogeneous effects on women-hunger for all percentiles, apart from AGL, which showed a heterogeneous effect on women-hunger. In all, the panel fixed effect quantile regression results strongly reinforce the main regression results of the study. Conclusion This study set out to find out how CF influences women-hunger alleviation, using an unbalanced panel of 43 SSA countries for the period 2006-2018. The collated data were analysed using SYS-GMM and complemented with panel fixed effect and POLS estimations. A major setback of these traditional estimation techniques is that they focus on the mean effect, which is likely to cause over- or under-estimation of the relevant coefficients. In response to this, panel fixed effect quantile regression was carried out as a sensitivity analysis so that median regression analysis is extended to other quantiles. The findings showed that CF (the narrow definition of CF) and the broad-definition CF variables (aid and Gov't-spending) have a positive potential to alleviate women-hunger in SSA. This is contrary to the expectations of the structural oppression theory of feminism. However, private financing of CF via FDI has a weaker potential to alleviate women-hunger in SSA. In addition, both mitigation and adaptation financing have a very good potential to alleviate women-hunger. Further, SSA countries strengthening their quest to fight against corruption are more likely to experience women-hunger alleviation. For the climate variables, warmer areas in SSA have a higher potential to experience worsened conditions of women-hunger.
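The fixed effect panel quantile regression referenced in the sensitivity analysis above is usually written in the penalized form introduced by Koenker (2004); the following LaTeX is a sketch of that standard specification under conventional notation and may differ in detail from the authors' exact equation.

Q_{y_{it}}(\tau_k \mid \alpha_i, X_{it}) = \alpha_i + X_{it}'\,\beta(\tau_k)

\min_{\alpha,\beta}\;\sum_{k=1}^{K}\sum_{t=1}^{T}\sum_{i=1}^{N} w_k\,\rho_{\tau_k}\!\bigl(y_{it}-\alpha_i-X_{it}'\beta(\tau_k)\bigr)\;+\;\lambda\sum_{i=1}^{N}\lvert\alpha_i\rvert

where \rho_{\tau}(u) = u\,(\tau - \mathbf{1}\{u<0\}) is the quantile check function, w_k weights the k-th quantile, and \lambda penalizes the country fixed effects \alpha_i to mitigate the incidental parameters problem.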
It can be seen from the findings that the gender requirement included in CF extended to developing countries is yielding positive women-hunger alleviation. Based on the findings, the study recommends that CF should be extended as financial aid or as support to government budgets, owing to its potential for alleviating women-hunger. Secondly, SSA countries should strengthen their quest to control corruption in order to ensure the appropriate use of CF to achieve women-hunger alleviation. Thirdly, more CF should be extended to warmer areas of SSA due to the exacerbating effect of temperature on women-hunger. A major limitation of our study was how the women-hunger variable is estimated. Thus, future research can look at other dimensions of both women-hunger and climate finance to triangulate the literature.
8,402.8
2024-02-05T00:00:00.000
[ "Economics", "Environmental Science" ]
A nitrogen-vacancy spin based molecular structure microscope using multiplexed projection reconstruction Methods and techniques to measure and image beyond the state-of-the-art have always been influential in propelling basic science and technology. Because current technologies are venturing into nanoscopic and molecular-scale fabrication, atomic-scale measurement techniques are inevitable. One such emerging sensing method uses the spins associated with nitrogen-vacancy (NV) defects in diamond. The uniqueness of this NV sensor is its atomic size and ability to perform precision sensing under ambient conditions conveniently using light and microwaves (MW). These advantages have unique applications in nanoscale sensing and imaging of magnetic fields from nuclear spins in single biomolecules. During the last few years, several encouraging results have emerged towards the realization of an NV spin-based molecular structure microscope. Here, we present a projection-reconstruction method that retrieves the three-dimensional structure of a single molecule from the nuclear spin noise signatures. We validate this method using numerical simulations and reconstruct the structure of a molecular phantom β-cyclodextrin, revealing the characteristic toroidal shape. Many of the atoms present in biomolecules possess nuclear spins. Mapping spin densities with molecular-scale resolution would aid in unraveling the structure of an isolated biomolecule 21,22 . This application motivates the development of an NV spin-based molecular structure microscope [23][24][25][26] . The microscope would have immense use in studying structural details of heterogeneous single molecules and complexes when other structural biology tools are prohibitively difficult to use. For example, an NV spin-based molecular structure microscope would have profound implications for the structure elucidation of intrinsically disordered structures, such as the prion class of proteins (PrP). This family of proteins is known to play a central role in many neurodegenerative diseases 27 . Understanding the structure-function relationship of this protein family will be crucial in developing drugs to prevent and cure these maladies. The NV spin is a high dynamic range precision sensor 28,29 ; its bandwidth is limited only by the coherence time and MW driving-fields 30,31 . The broadband sensitivity has an additional advantage because multiplexed signals can be sensed [32][33][34] . Fully exploiting this advantage, we present a projection-reconstruction method pertaining to an NV spin microscope that encodes the spin information of a single molecule and retrieves its three-dimensional structure. We analyze this method using numerical simulations on a phantom molecule β-cyclodextrin. The results show distinct structural features that clearly indicate the applicability of this technique to image isolated biomolecules with chemical specificity. The parameters chosen for the analysis are experimentally viable 4,[35][36][37] , and the method is realizable using state-of-the-art NV sensing systems 23,24,35 . We also outline some possible improvements in the microscope scheme to make the spin imaging more efficient and versatile. At equilibrium, an ensemble of nuclear spins following the Boltzmann distribution tends to have a tiny fraction of spins down in excess of spins up.
The expression for this population difference is given by the Boltzmann relation ΔN = N tanh(ΔE/2kT) ≈ N ΔE/2kT, where N is the number of spins, ΔE = hγB is the energy level difference, k is the Boltzmann constant, T is the temperature, h is the Planck constant, γ is the gyromagnetic ratio, and B is the magnetic field. For room temperature and low-field conditions (ΔE ≪ kT), the linear approximation in equation (1) holds. The spins reorient their states (↑-↓, ↓-↑) with a characteristic time constant while conserving this excess population. The average value of this spin excess remains constant, while the root mean square (r.m.s.) value of its fluctuation over any period is given by σ ∝ √N. This statistical fluctuation in the net magnetization signal arising from uncompensated spins is called spin noise, and it is relevant when dealing with small numbers of spins. Combining spatial encoding using magnetic field gradients and passively acquiring spin noise signals, Müller and Jerschow demonstrated nucleus-specific imaging without using RF radiation 38 . The spin noise signals become prominent when we consider fewer than 10 6 spins. Considering solid-state organic samples with spin densities of 5 × 10 22 spins/cm 3 , this quantity of spins corresponds to a nanoscale voxel 36 . Spin noise signals arising from a few thousand nuclear spins were sensed using MRFM and used to reveal a 3D assembly of a virus 4 . More recently, using NV defects close to the surface of a diamond, several groups were able to detect the nuclear spin noise from molecules placed on the surface under ambient conditions 19,20,25 . If a magnetic field sensor is ultra-sensitive, and especially if the sample of interest is nanoscale 13 , it is convenient to use spin noise based imaging; the advantage is that this sensing method requires neither polarizing nor driving the nuclear spins. Dynamical decoupling sequences, such as (XY8) n=16 , are used for sensing spin noise with the NV. The method uses pulse timings to remove all asynchronous interactions and selectively tune only to the desired nuclear spin Larmor frequency 17,19,25 . The acquired signal is deconvoluted with the corresponding filter function of the pulse sequence to retrieve the power spectral density of the noise, or the spin noise spectrum 39 . This approach has been proved to be sensitive down to a single nuclear spin in the vicinity 20 as well as to a few thousand nuclear spins at distances exceeding 5 nm 19 . Correlation spectroscopy approaches are able to achieve sub-kHz line widths 32 even for shallow NV spins 40 . The spin sensing results clearly demonstrate the potential of the NV spin sensor as a prominent choice for realizing a molecular structure microscope that is operable under ambient conditions. Spin imaging method Several different schemes are currently being considered for nanoscale magnetic resonance imaging (MRI) using NV-sensors: scanning the probe 12,15,41 , scanning the sample 23,24 and scanning the gradient 12,35 . The first two rely on varying the relative distance and orientation of the sensor to the sample, thereby sensing/imaging the near-field magnetic interaction between the NV spin and the spins from the sample. This sensing could be performed either by passively monitoring the spin fluctuation 19 or by driving sub-selected nuclear spins by RF irradiation 18 . This approach is particularly suitable when dilute spins need to be imaged in a sample or for samples that are sizable.
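To make the low-field approximation above concrete, the short sketch below evaluates the fractional thermal spin excess and the √N spin-noise scaling for protons; the field, temperature and spin-count values are illustrative assumptions, not parameters taken from this work.

# Sketch: thermal spin excess Delta_N/N = tanh(hbar*gamma*B / (2*k_B*T)) for protons,
# and the sqrt(N) scaling of the spin-noise fluctuation. Numbers are illustrative.
import math

hbar = 1.054571817e-34      # J*s
k_B = 1.380649e-23          # J/K
gamma_p = 2.6752218744e8    # proton gyromagnetic ratio, rad s^-1 T^-1

def fractional_excess(B, T):
    """Boltzmann population difference per spin (dimensionless)."""
    return math.tanh(hbar * gamma_p * B / (2.0 * k_B * T))

B, T = 0.02, 300.0          # assumed: ~200 G bias field, room temperature
N = 1.0e4                   # assumed number of spins in the probed voxel
print(f"Delta_N/N  ~ {fractional_excess(B, T):.1e}")   # ~7e-8: thermal polarization is tiny
print(f"rms excess ~ {math.sqrt(N):.0f} spins")        # statistical (spin-noise) fluctuation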
These methods read spin signal voxel-by-voxel in a raster scanning method, so they are relatively slow but have the advantage of not needing elaborate reconstruction 23,24 . For the method presented here to image a single molecule, we employ scanning the gradient scheme 12 Here, we present a three-dimensional imaging method that is especially suiting for nanoscale-MRI using NV spins. The multiplexed spin-microscopy method uses a projection-reconstruction technique to retrieve the structure of a biomolecule. Figure 1a shows a schematic of the setup that is similar to those utilized in nanoscale magnetometry schemes 12,35,42 . We place the sample of interest (a biomolecule) on the diamond surface very close to a shallowly created NV defect 43 . Achieving this could be perceived as a difficult task, but recent advancements in dip pen nanolithography (DPN) 44 and micro-contact printing (μ CP) 45 for biomolecules have been able to deposit molecules with nanometer precision. Another approach is to cover the surface with monolayer of molecules, or to use sub-monolayer concentrations but ensuring that a single biomolecule is able to be located within few nm from the NV sensor. The encoding stage proceeds in the following manner: a shaped magnetic tip is positioned such that we subject the sample to a magnetic field gradient on molecular scales (2-5 G/nm) 4,35 . In this condition, nuclear spins present in the biomolecule precess at their Larmor frequencies depending on their apparent positions along the gradient (Fig. 1b). The spin noise signal of the precessing protons from the sample is recorded using an NV center in close proximity 19 . Because of the presence of a field gradient on the sample, fine-features appear in the noise spectrum. These unique spectral signatures correspond to nuclear spin signal contributions from various isomagnetic field slices 38 (Fig. 1c,d). Directions of field gradient applied to the molecule are changed by moving the magnetic tip to several locations. This gradient gives distinct projection perspectives while the spin noise spectrum contains the corresponding nuclear spin distribution information. These signals are indexed using the coordinates of the magnetic tip with respect to the NV defect as Θ and Φ (measured using high-resolution ODMR) and are stored in a 3D array of S(Θ, Φ, ω) values. We require encoding only in one hemisphere because of the linear dependence (thus redundancy) of the opposite gradient directions. For a simple treatment, we assume the gradients to have small curvatures when the magnetic tip to NV sensor distances are approximately 100 nm (i.e. a far field). This assumption is along the lines of published works and is valid for imaging biomolecules of approximately 5 nm in size at one time 4,35 . The effects of non-axial field from the gradient source influencing the spin properties of NV could be minimized by applying a static field (B 0 ) of appropriate strength that is well aligned with the NV axis. The reconstruction procedure is as follows: by knowing the complete magnetic field distribution from the tip 42 , we can calculate the gradient orientation (θ, ϕ) at the sample location for any tip position (Θ, Φ) and rescale the spectral information to spatial information in 1D: ω = γr∇ B. In this way, the encoded data set matrix dimensions are transformed into θ, ϕ and r. Here, B rms (x, y, z) is the magnetic field fluctuation caused by the number of nuclear spins N(x, y, z) contained in the respective isomagnetic field slices. 
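The frequency-to-position rescaling ω = γ r ∇B described above can be checked with a one-line conversion; in the sketch below the gradient and spectral-resolution values are illustrative choices consistent with those quoted later in the Results, and the sampled extent is an assumption.

# Sketch: convert a spin-noise frequency offset to position along the gradient,
# and estimate the number of resolvable isofield slices. Values are illustrative.
GAMMA_H = 4.2577            # proton gyromagnetic ratio, kHz per gauss
gradient = 3.0              # field gradient, G/nm
resolution_khz = 1.3        # assumed spectral resolution
extent_nm = 2.4             # assumed spatial extent sampled along the gradient

def freq_to_position(omega_khz):
    """Position (nm) along the gradient for a Larmor frequency offset (kHz)."""
    return omega_khz / (GAMMA_H * gradient)

spread_khz = GAMMA_H * gradient * extent_nm          # ~31 kHz spread over the extent
n_slices = spread_khz / resolution_khz               # ~24 resolvable isofield slices
print(f"spread ~ {spread_khz:.1f} kHz, slices ~ {n_slices:.0f}")
print(f"10 kHz offset -> r ~ {freq_to_position(10.0):.2f} nm")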
As nuclear spin Larmor frequencies within every slice are identical, they all contribute to same frequency component in noise spectrum 38,46 . Therefore, the 1D signal we recorded is an integrated effect from the spins in the respective planes (refer to equation (2)). The next procedure follows along the lines of a filtered back-projection principle to remove high-frequency noise and projection artifacts. Here, we should ensure appropriate quadratic filters because the signal is a plane integral but not a line integral as in X-ray computed tomography (CT) 46 . The rescaled signal s(r, θ, ϕ) is filtered using a quadratic cutoff in the frequency domain. We perform the reconstruction in the following way; an image array is created with n 3 dummy elements in three dimensions I(x, y, z). Any desired index r i , θ j , ϕ k from the signal array is chosen, and the corresponding value s(r i , θ j , ϕ k ) is copied to the image array at location index z = r i . The values are normalized to n and replicated to every cell in the xy-plane at the location z = r i . The elements of the xy-plane are rotated to the values − θ j , − ϕ k following the transformation given by affine matrices shown in equation (3). The values are then cumulatively added to the dummy elements and stored in the transformed index. This procedure is repeated for every element in the signal array so that the corresponding transformed array accumulates values from all of the encoded projections. The transformed array with units in nm contains raw projection-reconstructed images. This array is then rescaled to account for the point spread function of NV spin and single proton interaction 23 . The resulting 3D matrix carries nuclear spin density in every element and contains three-dimensional image of the molecule. Results To evaluate this technique, we considered a simple molecule of β -cyclodextrin as a molecular phantom. This molecule has a toroidal structure with an outer diameter of 1.5 nm and an inner void of 0.6 nm (Fig. 2a). We specifically selected this molecule so that we could easily visualize the extent of the structural details that can be revealed by reconstruction. We used the crystallographic data of β -cyclodextrin from the Protein Data Bank and considered the coordinate location of all the hydrogen atoms for the numerical simulations performed using MATLAB. Some relevant information about the molecular spin system and parameters used for the simulations is listed in Table 1. We virtually position the molecule in the proximity of an NV defect that is placed 5 nm beneath the surface of the diamond. At these close distances, the spin noise from hydrogen atoms is sensed by the NV spin 19 . We compute the fluctuating magnetic field amplitude (r.m.s.) produced by proton spins at the location of the NV spin by using the expression given by Rugar et al. 23 . The β -cyclodextrin molecule, when placed in the vicinity, produces a field of about 94 nT (r.m.s.), matching reported values 19 . We subject the molecule to the magnetic field gradients of 3 G/nm, and this produces spread in Larmor frequencies of 30.6 kHz for the hydrogen spins in the examined volume ( 3 times molecule size). As explained before, we compute the B rms field produced by hydrogen spins in each isofield slice set by the spectral resolution ~1.3 kHz. 
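The replicate-rotate-accumulate step of the reconstruction described above can be sketched in a few lines of Python/NumPy; the grid size, angle grids and signal array below are illustrative stand-ins for the encoded data set, and scipy.ndimage.rotate is used here purely as one convenient way to apply the two plane rotations rather than the specific affine implementation of equation (3).

# Sketch of the back-projection accumulation: for each (r, theta, phi) sample,
# replicate the 1D signal value over the plane z = r, rotate the volume back by
# (-theta, -phi), and accumulate into the image array.
import numpy as np
from scipy.ndimage import rotate

def back_project(signal, thetas, phis):
    """signal[i_r, j_theta, k_phi] : rescaled 1D projection data.
    Returns an n*n*n accumulation volume (unnormalized spin density)."""
    n = signal.shape[0]
    image = np.zeros((n, n, n))
    for j, theta in enumerate(thetas):
        for k, phi in enumerate(phis):
            profile = signal[:, j, k] / n                 # normalize by grid size
            # Fill every xy-plane at height z with the corresponding profile value
            plane_stack = np.broadcast_to(profile[:, None, None], (n, n, n)).copy()
            # Undo the encoding orientation with two rotations (angles in degrees)
            rotated = rotate(plane_stack, -np.degrees(theta), axes=(0, 1),
                             reshape=False, order=1)
            rotated = rotate(rotated, -np.degrees(phi), axes=(0, 2),
                             reshape=False, order=1)
            image += rotated
    return image

# Example with synthetic data: 64 radial bins, 9 x 9 gradient orientations
# sig = np.random.rand(64, 9, 9)
# vol = back_project(sig, np.linspace(0, np.pi / 2, 9), np.linspace(0, np.pi, 9))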
The 1 H spins in the respective slices precess at their characteristic Larmor frequencies, so the noise spectrum reveals spectral features containing information about the spin density distribution along the gradient direction. We show the computed spin noise spectra (B rms vs. frequency shifts) from the β-cyclodextrin molecule for two different gradient orientations in Fig. 2b. (Table 1 lists the relevant parameters for imaging the molecular phantom β-cyclodextrin with the NV spin based molecular structure microscope by the projection-reconstruction method, including a 3D structure acquisition time of ~33 minutes.) For three-dimensional encoding and reconstruction, we considered 9 × 9 unique gradient orientations equispaced along different θ, ϕ angles. The spectral data are computed for every projection, converted to spatial units and stored in a 3D matrix. This signal matrix is shown in Fig. 2c as slices in the r, θ dimensions. We apply the reconstruction algorithm as explained above to get the spatial distribution of hydrogen atoms. The structure of the reconstructed molecule clearly reveals its characteristic toroidal shape (Fig. 2d). The quality of the image reconstruction depends on the number of distinct tip locations (or gradient orientations) used for encoding 38 . The simulations presented in Fig. 2 display the reconstruction quality achieved for a toroidal molecule, β-cyclodextrin, for a set of 81 measurements used for encoding and decoding. If we consider the signal acquisition time of 22 seconds reported for a single-point spectral measurement 23 and calculate the time needed for achieving the desired spectral resolution (~1.3 kHz), it results in long averaging times. We note that the signal acquisition time reduces dramatically to ~1 second/point by using double-quantum magnetometry 47 together with enhanced fluorescence collection 48 . In this case the complete image acquisition time becomes approximately 33 minutes for the data set used here to reconstruct the molecular structure of an isolated β-cyclodextrin. Discussion The β-cyclodextrin molecule we have considered for simulations is a simple molecule, but it has a characteristic toroidal structure and is easy to visualize. The results clearly showed molecular-scale resolution and provided information about the structure. The structural details and achievable resolution depend on the following factors: The primary factor is the SNR obtainable when recording the noise spectra. Demonstrations using double quantum transitions (m s = − 1 ⇔ m s = + 1) achieved high-fidelity spin manipulation 49 and improved sensing 47 ; applying those techniques could improve signal quality. The coherence time of the NV spin would be a factor in achieving better SNR 23 , but not the most decisive one for noise spectroscopy, as shown by correlation spectroscopy techniques that achieve sub-kHz linewidths even with a shallow NV spin 40 . Spectral reconstruction using a compressed sensing approach is expected to considerably speed up sensing 21,22,33,34 . In addition, improved fluorescence collection efficiency from single NV defects can be achieved using nanofabricated pillars 50 , solid immersion lenses 51 and patterned gratings 48 . The primary source of noise being the photon shot noise, boosting signal quality naturally increases the achievable resolution. Other techniques, such as dynamic nuclear polarization 52 , selective polarization transfer 17,19,20,22 , and quantum spin amplification mediated by a single external spin 53 , could give additional signal enhancements.
Schemes employing ferromagnetic resonances to increase the range/sensitivity could provide other factors for resolution improvement 54,55 . The presented method is efficient for the reconstructing structural information and spins distribution at the nanoscale whenever it is possible to perform three-dimensional encoding. The encoding can be done either by using an external gradient source 12,35,56 or by the field gradient created by NV in its vicinity 20,57 . It is important to consider nuclear spins from water and other contaminants that form adherent monolayers on the surface of diamond 4,19,23,24 . The gradient encoding will register their spatial location appropriately. Upon reconstruction, this would result in a two-dimensional layer seemingly supporting the molecule of interest. This plane could come as a guide for visualization but can be removed by image processing if required. Homo-nuclear spin interactions cause line broadening and become a crucial factor when dealing with spins adsorbed on a surface. Unwanted spin interactions can be minimized by applying broadband, and robust decoupling methods such as phase modulated Lee-Goldburg (PMLG) sequences 58 . The factors determining the achievable structural resolution are the magnetic field gradient, the SNR of spin signals, number of distinct perspectives and the spectral linewidth of the sample. It is practical to retrieve structural details with atomic resolutions by applying larger gradients, using SNR enhancing schemes, improving fluorescence collection, and using decoupling. We have considered reported parameters and demonstrated that our method can achieve molecular-scale resolution. Although continued progress clearly indicates that attaining atomic resolutions is within reach 35,59 . However, for many practical applications, it is sufficient to obtain molecular-scale structures that contain information relevant to biological processes. The key feature of the NV-based molecular structure microscope is the ability to retrieve the three-dimensional structural details of single isolated biomolecules under ambient conditions without restrictions on the sample quality or quantity. These will be very much useful for studying hard-to-crystallize proteins and intrinsically disordered proteins. A molecular structure microscope that has the potential to image molecules like prion proteins would be pivotal in understanding the structure, folding intermediates and ligand interactions. These insights would undoubtedly pave ways to understand the molecular mechanisms of diseases pathways and develop efficient therapeutic strategies for their treatment and prevention 60 .
4,358.2
2015-05-12T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
Experimental study on anti-collapse performance of beam-column assembly considering surrounding constraints Due to the uncertainty of the position of the failure column in the building structure, the surrounding constraints of the beam-column substructure are different after the local failure. Three-column and two-beam substructures with two different boundary conditions (overall constrained and partially constrained) are taken as the research objects, and the failure mode, mechanical behaviour and anti-collapse mechanism of the beam-column substructures are investigated by static loading tests. The tests show that the deformed shapes of the two specimens are similar and that the development of the resistance mechanism follows the beam mechanism stage, the mixed beam-catenary mechanism stage, and the catenary mechanism stage. However, boundary conditions have a significant influence on the development of the catenary mechanism of the beam-column substructure: the catenary mechanism of the overall constrained specimen accounts for 30% of the total resistance, while the ultimate deformation capacity of the partially constrained specimen is weak and its catenary mechanism resistance accounts for only 15%. Introduction The continuous collapse of a building is a phenomenon in which a certain part of the structure is damaged by accident or severe local overload, and the damage is transmitted along the load path, resulting in the loss of bearing capacity of the main structure and then overall collapse. This kind of collapse, which is disproportionate to the initial damage, seriously threatens the safety of human life, so it has aroused widespread concern in society and academia. At present, the design methods for continuous collapse resistance proposed by countries all over the world include the conceptual design method, the bond-strength design method, the dismantled component method, and the key component method [1][2]. Among them, the dismantled component method removes one or several components and then analyses the collapse resistance of the remaining structure, which is why research on structural collapse resistance mainly focuses on beam-column structures with clearly defined force states. Compared with the double half-span substructure, the two-span three-column substructure can fully consider the change in the position of the beam's inflection point during the failure of the substructure, and can more truly reflect the internal force changes and resistance mechanism of the beam-column structure during the process of continuous collapse resistance after the failure of the middle column [3]. Due to the limitations of test conditions, the relevant regulations on frame structures at home and abroad are mainly based on experience or the results of finite element analysis. Alogla et al. [4] studied the influence of different reinforcement ratios on the collapse resistance of the structure through collapse tests on beam-column structures considering the surrounding constraints. The static test results were converted into dynamic results through the energy conservation method. The analysis shows that the ductility and collapse resistance of reinforced concrete beam and column structures can be improved by using a reasonable reinforcement ratio.
Zhou Yun and Chen Taiping [5] analyzed the influence of side span constraints on the bearing capacity of the studied substructures against continuous collapse, and analyzed the bearing capacity of the structure during the removal of the bottom column of the frame. The research factors on the continuous collapse resistance of substructures at home and abroad mainly lie in the beam-column joint form, span ratio, reinforcement ratio, etc. [6][7][8][9] The influence of lateral constraints on the collapse resistance of beam and column structures is rarely considered, and the test of lateral constraints on the collapse resistance of steel frame structures is rarely reported. Current research results show that the continuous collapse resistance process of the steel frame is divided into three parts: beam mechanism stage, beam mechanism and catenary mechanism mixing stage, and catenary mechanism stage. The catenary mechanism stage is the last line of defense against continuous collapse of the structure, and the play of catenary is closely related to the form of peripheral constraints. Therefore, it is necessary to study the influence of surrounding constraints on the collapse resistance of the substructure. In this paper, two kinds of beam-column substructures with different boundary conditions, namely the overall constraint structure with the failure member located in the middle of the structure and the partial constraint substructures with the failure of the secondary column, are selected to carry out quasi-static tests under the failure of the middle column. The impact of the beam-column structure's collapse resistance, the failure form, force mechanism and resistance mechanism of its collapse process are deeply analyzed. Specimen design In this experiment, two beam-column structure specimens, which are overall constrained and partially constrained, were designed, and their numbers are WUFG and WUFG-S. The scale of the specimen model is 1:3. The beams and columns are all H-shaped cross-sections with dimensions of 150 mm× 100 mm× 6 mm× 9 mm, 150 mm× 150 mm× 8 mm× 10 mm, the left and right span beams are L=1500 mm, and the column length LC is 1100 mm. The geometric dimensions of the beam and column structure are shown in Figure 1. All of them adopt Q235B steel. The beam-to-column node at the center column of the substructure adopts a trapezoidal cover plate bolted connection (CPS) method. This node type has the characteristics of large rotation stiffness, strong deformation ability and superior collapse resistance, and the detailed diagram is shown in Figure 2. Loading device and system The test loading device is shown in Figure 3. The top of the failed column of the substructure is connected with a 100 T hydraulic servo actuator, and the vertical load is applied by static loading to simulate the load effect from the upper part when the structure collapses continuously. The loading process adopts displacement loading, each level applies at 5 mm, and the loading rate will not exceed 5 mm/min. After each level of loading is completed, holding the load for 3-5 min, and after the deformation of the substructure is stable, we proceed to the next level of loading until the specimen is damaged. Test phenomena and failure modes The load-displacement curves of WUFG and WUFG-S specimens obtained by the hydraulic servo actuator are shown in Figure 4, and there are two peak points correspondingly. The destruction phenomenon of each specimen is shown in Figure 5 and Figure 6. 
(1) When the loading displacement of the WUFG specimen reaches 195 mm, cracks appear in the base material near the weld between the lower flange of the west steel beam and the cover plate. When the load-displacement curve reaches point A1, the tensile flange at the weld of the cover plate of the west steel beam breaks, and the load drops suddenly, and the test phenomenon is shown in Figure 5. Internal force redistribution occurs in the beams on both sides, and the load appears to rise, and then reaches point A2, and the specific phenomenon is shown in Figure 5. (2) When the loading displacement of the WUFG-S specimen is increased to 209mm (corresponding to point B1 in the curve in Figure 5), the load is instantaneously reduced after the fracture of the lower flange of the steel beam on the west side near the failed column and the base material near the weld of the trapezoidal cover plate. When the load drops instantaneously from 255 kN to 238 kN, the phenomenon is shown in Figure 6. As the loading displacement at the center column gradually increases, the fracture of the lower flange is intensified, and the internal force redistributes in the beam until the load reaches a new peak again, when the loading displacement reaches 320 mm (corresponding to curve B2 in Figure 4), The test phenomenon is shown in Figure 6. The test results show that the fully constrained specimen first produced cracks at the lower flange of the steel beam on the west side of the failed column, and then the cracks developed to the web plate, and finally the web was completely broken. Due to the unbounded beam end column constraint of some constraint specimens, the base material of the lower flange close to the end of the cover plate near the failed column fractures first, and the cracks develop upwards. The test was ended with a complete fracture in the middle of the web plate. From the test results, the influence of peripheral constraints on the flexural bearing capacity of the beam-column structure is not obvious, but it has a significant effect on the transformation process of the structure's force mechanism, which greatly improves the ultimate bearing capacity of the substructure during the tension stage. The column displacement of the overall constrained substructure is larger than that of the partially constrained substructure. The ultimate deformation capacity of the partially constrained substructure is weaker. Deformation analysis of substructure The deformation development process of the two specimens is shown in Figure 7. It can be seen from the figure that the deformation trends of the two specimens are similar. At the initial stage of loading, the beam-column structure deforms due to bending (approximately a quadratic curve). As the loading displacement increases during the loading process, the beam-column structure shows large deformation characteristics. From the perspective of deformation, the chord rotation angle and node rotation capacity of the fully constrained substructure are both greater than that of the partially constrained substructure, which finally shows that the different lateral constraint stiffness caused by the boundary conditions will affect the rotation capacity of the substructure nodes. Analysis of anti-collapse mechanism of beam and column structure The development curve of the resistance mechanism of the two specimens with loading displacement is shown in Figure 8. 
It can be seen from the curves that at the initial stage of loading the two specimens resist the external load through the flexural action of the beams, and the total resistance is mainly provided by the beam flexural mechanism. As the loading displacement increases, the beam bending mechanism becomes almost stable. When the lower flange of a specimen breaks under tension, the flexural resistance of the beam gradually decreases and the resistance generated by the catenary mechanism suddenly increases. This is the transition from the beam bending resistance mechanism to the beam catenary mechanism, that is, the transition stage. As the loading displacement increases further, the catenary mechanism becomes more pronounced and eventually becomes the main resistance mechanism of the substructure. For specimen WUFG, the beam mechanism contributes about 70% of the resistance over the entire loading process, and the catenary mechanism contributes about 30%. For specimen WUFG-S, there is no side column tie at one end, so the remaining structure does not provide sufficient lateral restraint under large deformation; the axial deformation of the beam is released, resulting in insufficient development of the catenary mechanism of the beam, and its catenary mechanism resistance contribution is only 15%. Conclusions (1) After the flexural bearing capacity of the two specimen beams dropped suddenly, the beams on both sides deformed in a coordinated manner, and the load reached a new peak after the internal forces were redistributed. Therefore, in the beam mechanism stage, strengthening the boundary conditions does not significantly increase the bearing capacity of the substructure. However, in the catenary mechanism stage, the bearing capacity of the WUFG specimen increased significantly, far exceeding that of the WUFG-S specimen; strengthening the substructure boundary conditions can significantly improve the structural bearing capacity. (2) The deformation shapes of the two specimens are similar, but the difference in the boundary conditions of the substructures can significantly affect their ultimate deformation capacity. With the enhancement of peripheral constraints, the deformability and node rotation capacity are correspondingly improved. (3) The resistance development process of the two-span three-column substructure of the composite beam experienced three stages: the beam bending mechanism stage, the mixed beam-catenary mechanism stage, and the catenary mechanism stage. However, the development of the catenary effect is significantly influenced by the different peripheral constraints of the substructures. The catenary mechanism resistance contribution of specimen WUFG is about 30%, whereas for specimen WUFG-S the catenary effect was not fully developed due to the lack of a tie at one end and insufficient peripheral constraint, and its catenary mechanism resistance contribution is only 15%.
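One common way to separate the beam (flexural) and catenary contributions in tests of this kind is to resolve the measured beam internal forces into vertical components using the chord rotation; the following Python sketch illustrates that bookkeeping under the usual small-rotation assumptions and is not the specific data-reduction procedure reported for these specimens.

# Sketch: splitting the vertical resistance of a two-span beam-column substructure
# into a flexural (beam-mechanism) part and a catenary part. This is a common
# equilibrium-based decomposition, not necessarily the authors' exact procedure.
import math

def resistance_split(axial_force_kN, shear_force_kN, delta_mm, span_mm=1500.0):
    """axial/shear forces measured in one beam near the failed column;
    delta is the vertical displacement of the failed column."""
    theta = math.atan(delta_mm / span_mm)               # chord rotation of each span
    catenary = 2.0 * axial_force_kN * math.sin(theta)   # vertical component of beam tension (both spans)
    flexural = 2.0 * shear_force_kN * math.cos(theta)   # vertical component of beam shear (both spans)
    total = catenary + flexural
    return flexural, catenary, (catenary / total if total else 0.0)

# Example with hypothetical readings: 300 kN beam tension and 60 kN shear at 300 mm displacement
# flex, cat, ratio = resistance_split(300.0, 60.0, 300.0)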
2,874.2
2021-01-26T00:00:00.000
[ "Engineering", "Materials Science" ]
PAGER-CoV: a comprehensive collection of pathways, annotated gene-lists and gene signatures for coronavirus disease studies Abstract PAGER-CoV (http://discovery.informatics.uab.edu/PAGER-CoV/) is a new web-based database that can help biomedical researchers interpret coronavirus-related functional genomic study results in the context of curated knowledge of host viral infection, inflammatory response, organ damage, and tissue repair. The new database consists of 11 835 PAGs (Pathways, Annotated gene-lists, or Gene signatures) from 33 public data sources. Through the web user interface, users can search by a query gene or a query term and retrieve significantly matched PAGs with all the curated information. Users can navigate from a PAG of interest to other related PAGs through either shared PAG-to-PAG co-membership relationships or PAG-to-PAG regulatory relationships, totaling 19 996 993. Users can also retrieve enriched PAGs from an input list of COVID-19 functional study result genes, customize the search data sources, and export all results for subsequent offline data analysis. In a case study, we performed a gene set enrichment analysis (GSEA) of a COVID-19 RNA-seq data set from the Gene Expression Omnibus database. Compared with the results using the standard PAGER database, PAGER-CoV allows for more sensitive matching of known immune-related gene signatures. We expect PAGER-CoV to be invaluable for biomedical researchers to find molecular biology mechanisms and tailored therapeutics to treat COVID-19 patients. INTRODUCTION With COVID-19 becoming a pandemic, COVID-related biomedical research has generated a large amount of genomics and functional genomics data since January 2020 to characterize viral and host factors related to the disease outcome (1)(2)(3)(4)(5). As of 10 August 2020, the GEO database (6) from the National Center for Biotechnology Information contained 18 available COVID-19 genomic data sets consisting of 73 samples using 'COVID-19' as the search term, or 26 data sets consisting of 736 samples using 'SARS-CoV-2' as the search term (7). There is an urgent need to extract biological insights from SARS-CoV-2-related RNA-seq, single-cell RNA-seq and proteomic experimental results (2)(3)(4)(5). Our ability to identify SARS-CoV-2 related genes, RNAs, proteins, interactions, functional network modules and pathways will help design new and better diagnostic techniques, therapeutic targets, or vaccines to fight against COVID-19 (7)(8)(9). To perform functional genomics downstream analysis such as the Gene Set Enrichment Analysis (GSEA) (10), users today rely on general-purpose gene set databases, e.g. MSigDB (11), KEGG (12), EnrichR (13) or PAGER (14). However, while these databases generally contain 'immune response' pathways or gene signatures based on prior studies of cancer, autoimmune disorders, or other infectious diseases, they lack specific SARS-CoV-2 gene sets identified in recent SARS-CoV-2 genomic or functional genomic studies. For example, as of 1 August 2020, a quick search of 'COVID' or 'SARS-CoV-2' in MSigDB returns no results, and a search of 'SARS' or 'coronavirus' returns only one result. Likewise, a search using these queries against KEGG (12) retrieves only two COVID-19-related papers, while the same search against EnrichR returns no results. Increasing research has led to the development of several COVID-19 databases, e.g.
the COVID-19 Drug and Gene Set Library (15) and the Databases for the targeted COVID-19 therapeutics (16), both of which were published in August 2020. However, these databases cover only part of the COVID-19 biomedical research topics and do not include all prior knowledge of immune response gene signatures and pathways from related immunological research studies. They also do not include computational analysis tools to help users perform gene set enrichment analysis. Therefore, identifying novel gene signatures and biological pathways as genomic features of viral infection in various tissues remains an ad hoc exploratory process (17,18). To provide the community with structured COVID-19 dedicated gene set data and a specialized GSEA search database, we developed PAGER-CoV (Pathways, Annotated gene-lists, and Gene signatures Electronic Repository for Corona Virus), accessible freely at http://discovery.informatics.uab.edu/PAGER-CoV/. For the current release of PAGER-CoV as of this publication, we compiled a total of 11 835 PAGs (Pathways, Annotated gene-lists, and Gene signatures) from 33 data sources including (i) expert-curated SARS-CoV-2 related PAGs from recently published high-quality COVID-19 papers in LitCovid (19), (ii) curated COVID-19 pathways related to candidate drug repositioning from the PubChem database (20) and (iii) selected immune response PAGs imported from the PAGER 2.0 database (14). PAGER-CoV is designed as a web database that compiles comprehensively curated gene sets on coronavirus-related infection, inflammation, organ damage, and repair from the literature and public databases. PAGER-CoV has an intuitive user interface, with which users can perform basic browsing of COVID-19 related PAGs using either a medical term such as 'cytokine storm' or an official gene symbol such as 'ACE2'. Also, PAGER-CoV allows users to perform GSEA analysis using a list of genes, e.g. those from a differentially expressed gene list generated from a COVID-19 RNA-seq experiment, to quickly retrieve top-scoring PAGs that relate closely to the input gene list. By browsing through retrieved PAGs, users can examine (i) virus or human gene components of each PAG, (ii) each PAG's curated description, (iii) the source literature or database reference of each PAG, (iv) gene-gene interaction relationships among the genes covered by the PAG, (v) each PAG's pre-calculated quality score ('nCoCo Score') that measures PAG quality using the topology of intra-PAG gene-gene interactions while controlling for PAG size (14) and (vi) related PAGs based on shared membership (m-type) or regulatory (r-type) PAG-to-PAG relationships described in (14,21). To accommodate the rapidly accumulating SARS-CoV-2 functional genomic data, we also designed a 'Content Contribution' page through which users can upload customized content for incorporation into future releases. PAGER-CoV users can also download partial or full database content for advanced bioinformatics analysis elsewhere. For the rest of this paper, we will describe how the database content was constructed, how web users can interact with the database, and why PAGER-CoV represents an improvement over general-purpose gene set databases for characterizing coronavirus-related functional genomics data. Figure 1 demonstrates the PAGER-CoV database schema, which contains eleven entities (also called tables) and fourteen relationships.
The primary design was adapted from our prior work on the PAGER 2.0 database (14). Briefly, (i) the PAG table contains the general information of the PAGs, including the PAGs' IDs, names, and data sources from which the PAGs are compiled, and PAG categories. As in (14). Each PAG belongs to either one of three categories: curated pathways/networks (P-type), curated gene sets without pathway/network (A-type), computationally derived gene sets with little or no curation (Gtype), such as differentially expressed gene from an RNAseq data. ( (22); while GENE2GENE REG replicates gene-gene regulations, which are validated invitro experiment, from the PAGER database (14). (v) The PAG2PAG R-TYPE and PAG2PAG M-TYPE tables contain two types of PAG-PAG relationships: regulatory and co-membership. As in (14) the PAG-PAG regulatory relationship reflects the PAG causal ordering inferred from gene-to-gene regulations; while the co-membership relationship reveals signaling cross-talk between PAGs that share signaling components within signal transduction pathways, in response to external stimuli. Data in the PAGER-CoV database is managed by the Oracle 19c relational database engine. Data collection overview We compiled data into the PAGER-CoV database based on two general strategies: expert curation from literature and automated database integration. The expert curation involves manual data extraction from COVID-19 literature following by quality control, which is different from our earlier high-throughput automated software-based curation method (14,21). Curation of P-type PAGs from PubChem To incorporate COVID-19 P-type PAGs, we performed web scraping for pathways relating to COVID-19 pathways on PubChem (https://pubchem.ncbi.nlm.nih.gov/#query= covid-19&tab=pathway). We wrote a Python 3 script on Anaconda distribution, which calls PubMed's Common Gateway Interface (CGI) (23) to download these PubChem COVID-19 pathways and their genes. The script directly made an API call to the PubMed website to get the most up-to-date gene expression of COVID-19 Pathways and refreshes on an automated batch schedule that maintains the data processing. Upon the downloaded pathway and gene information, the immunologist would curate, including revising the pathway description and removing COVID irrelevant genes, each pathway. Manual curation of A-Type PAGs Four A-type PAGs representing computationally-predicted repositioned drugs for COVID-19 were curated from (24). Five A-Type PAGs were manually curated from Mouse Genome Informatics Database (MGI), reflecting tissue or cell development markers. For these PAGs from MGI, the mouse gene IDs were converted to official human gene symbols before being added to PAGER-CoV. An A-Type PAG representing cytokine-storm-related genes were curated from a review article (25). An A-Type PAG was generated by processing raw single-cell sequencing data from https:// zenodo.org/record/3744141#.XuknTi2ZN24 and added to PAGER-CoV. Additionally, an A-Type PAG representing human exosome markers was curated from a review article (26). Literature curation of G-Type PAGs Following comprehensive SARS-CoV-2 literature review, manual curation of SARS-CoV-2/COVID-19 G-Type PAGs from emerging SARS-CoV-2 literature or data source was performed using the following methodology. 
First, the mapping of SARS-CoV-2 proteins to SARS-CoV-2 gene information was manually curated from the NCBI GenBank database using the SARS-CoV-2 sequence (NCBI Reference Sequence: NC_045512.2) isolated from patient zero at the Wuhan Seafood Market in Wuhan, CN (27). SARS-CoV-2 gene symbols were mapped to the viral protein product, e.g. 'ORF1ab polyprotein' mapped to the ORF1ab gene. G-Type PAGs manually curated from this study were given appropriate PAG titles (e.g. 'Viral gene encoding SARS-CoV-2 Nsp1 viral protein' for SARS-CoV-2 protein nsp1) and annotated with additional information in the 'PAG Name' field. Mature peptide sequence information was matched to the corresponding viral gene or open reading frame product information, alongside the corresponding protein IDs. Annotation of SARS-CoV-2 protein function, i.e. the 'Geneset description' attribute, was taken from the COVID-19 subset of the UniProtKB database (28). A total of 33 PAGs (each containing a single viral gene member) were compiled in this manner, representing the relationship between viral proteins and viral genes. Following this step, PAGs relating to in-vitro-validated SARS-CoV-2 viral protein to human host gene interactions were curated from a study in which the authors cloned and expressed SARS-CoV-2 viral proteins in vitro and identified human host binding partners using affinity purification mass spectrometry (29). A total of 88 PAGs were curated from this study: 71 PAGs representing the total viral-to-human protein-protein binding partners identified, and 17 PAGs representing known druggable targets. In addition, 64 PAGs representing the significant cellular pathways disrupted during SARS-CoV-2 infection were curated from another proteomics study in which the authors used human cell-culture lines to examine proteomic changes in SARS-CoV-2 infected human cell lines over time (2). Next, we curated repositioned drug target gene sets relating to clinical drugs under investigation to treat COVID-19. COVID-19 repositioned drugs, and their associated human protein drug targets and ADME proteins, were manually curated from the DrugBank database (30). Genes missing from the DrugBank database were manually searched for in the literature and cited accordingly. PAGs with missing genes were excluded from import into PAGER-CoV. From this step, a total of 96 completed drug target/ADME-associated G-Type PAGs were added to PAGER-CoV. For the final step of manual curation, available raw sequencing data from newly emerging COVID-19 studies were searched on the NCBI GEO database with the keyword search terms 'COVID-19' and 'SARS-CoV-2'. Available datasets were comprehensively evaluated by our curation team to identify high-quality COVID-19-specific G-type PAGs, which were then processed, analyzed, and curated into PAGER-CoV. To compare host-related immune responses in patients between SARS-CoV-2 and other respiratory viruses, raw RNA-sequencing data available from clinical samples of non-SARS-CoV-2-related viral pneumonia were also re-analyzed, processed, and added to PAGER-CoV as two separate PAGs (31). Therefore, a total of ten G-type PAGs were collected this way. PAG data quality control To clean the data from the curated sources, we created an automatic checking system to correct errors in the curated data, assign internal PAG identifiers, and insert the records into the PAGER-CoV database.
We observed that the errors came from three sources. The first type of error arises from curation, such as duplicate genes in a PAG member list or invalid genes with no official gene names or Entrez IDs, which needed to be fixed. The second type of error is invalid characters embedded in the content; for example, u'\xa0' was replaced by a space, u'\u2030' was replaced by '&quote', etc. The third type of error is missing annotations in the original data; for example, a few pathways in PubChem had no taxonomy name, so we pulled out these pathways, manually checked the pathway descriptions and information in the original sources, and added back the species. To assign new identifiers to PAGs in sequence, we encoded the PAG type using a three-letter prefix in the naming convention, retrieved the last number among existing type-specific PAGs in the database, and assembled the new identifier. Before inserting the records, our curator team validated and approved each PAG individually. Additional PAG annotations The quality of PAGs is measured by a normalized statistically significant coverage of gene-gene functional correlations in gene-pairs or gene-triplets, named the 'normalized Cohesion Coefficient score (nCoCo)' in PAGER 2.0 (14). The brute-force way of measuring the quality of PAGs is to report a total count of all the interactions for each PAG. However, this does not provide a measurement against the background, and such a count can vary dramatically when non-quality factors change, e.g. an increase in PAG size. Therefore, we introduced the nCoCo score to address the following problems: 1. In the nCoCo score, we measure not only the count of 'binary interactions' but also 'interaction triangles', the latter of which is a measure of the existence of network modules. 2. In the nCoCo score, we convert the count of interactions and interaction triangles into a statistic against the count in the background distribution from randomly generated PAGs. Therefore, the reported statistic carries more statistical significance than a simple count. 3. In the nCoCo score, we perform additional size normalizations (method described in PAGER 2.0) to make the density score of PAGs at varying sizes comparable by eliminating the score's size bias. Gene prioritization within PAGs is based on a gene weight calculated in the PAG, called the 'relevant protein score (RP-score)', as described in PAGER 2.0 (14). To compute the nCoCo scores, first, we applied the HAPPI-2 database to recalculate the CoI and CoT scores using the hypergeometric cumulative distribution function (CDF). Second, we built multi-box plots using bins on a log2 scale of PAG gene sizes, used the median to represent the value in each bin, and applied a polynomial function to find the regression of the CoI score versus PAG size, where Sz(p) is the size of the PAG p and CoI(p) is the CoI score of the PAG p. Third, we calculated the normalized score from this regression following the PAGER 2.0 formulation, where med(PAGn) is the median gene size of all PAGs and a and b are the regression coefficients.
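Before the final summation step described next, the size-normalization idea in the second and third steps above can be sketched in Python as follows; the binning and polynomial fit mirror the description in the text, but the adjustment at the end is illustrative only, since the exact normalization formula follows PAGER 2.0 and is not reproduced here.

import numpy as np

# Illustrative size normalization of CoI scores (not the published formula).
# coi[i] is the CoI score and sizes[i] the gene-set size of the i-th PAG.

def fit_size_trend(sizes, coi, degree=2):
    # Bin PAGs on a log2 scale of size, take the median CoI per bin,
    # and fit a polynomial regression of median CoI against log2 size.
    sizes = np.asarray(sizes, dtype=float)
    coi = np.asarray(coi, dtype=float)
    log_sizes = np.log2(sizes)
    bins = np.floor(log_sizes).astype(int)
    centers, medians = [], []
    for b in np.unique(bins):
        mask = bins == b
        centers.append(log_sizes[mask].mean())
        medians.append(np.median(coi[mask]))
    return np.polyfit(centers, medians, degree)

def normalized_coi(sizes, coi, coeffs):
    # Remove the fitted size trend so PAGs of different sizes are comparable.
    sizes = np.asarray(sizes, dtype=float)
    coi = np.asarray(coi, dtype=float)
    expected = np.polyval(coeffs, np.log2(sizes))
    return coi - expected  # illustrative adjustment only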
Fourth, the nCoCo score is calculated as the sum of the normalized interaction score nCoI and the normalized triangle score nCoT, i.e. nCoCo = nCoI + nCoT. To find an optimal nCoCo score cutoff, we created a negative set of PAGs by substituting gene members in 'true' PAGs with gene members randomly generated from the PAGER-CoV database. After calculating the nCoCo score of the negative PAGs, we chose the optimal nCoCo score cutoff that maximized the product of sensitivity (true positives over true cases) and specificity (true negatives over negative cases). PAGER-CoV database web user interface The web user interface implements the following essential functionalities for biomedical researchers and bioinformaticians: (i) Basic Search. On the main home page, users can search the database using a medical term or a gene symbol and retrieve a list of PAGs. The retrieved PAGs can be refined, explored on the web, or downloaded onto the user's computer for further analysis. (ii) Downstream analysis. On the 'Analyze' page, users can perform GSEA with an input gene list. Users can customize the statistical parameters according to their specific experimental requirements. (iii) Contribute content. On the 'Contribute' page, a user can upload their curated gene sets and pathways for review and subsequent consideration for inclusion in the PAGER-CoV database. The submission file can be in either differential gene expression (DEG) format or literature-curation (LIT) format, as described on the 'Contribute' page. After submission, the contributed data will be checked for quality and, after passing these checks, integrated into PAGER-CoV. (iv) Download the database. On the 'Download' page, users can download different database versions. This feature allows users to perform independent GSEA analyses. PAGER-CoV is free and open to all users, and there is no login requirement. The PAGER-CoV website features an improved user interface and user-upload schema over the related PAGER 2.0 database, with a more intuitive user-side browsing, analysis, and submission experience (Figure 3). To improve user navigation, we restructured the PAGER web interface so that the 'Basic Search' function is the feature in focus on the PAGER-CoV home page. We also streamlined the navigation from one PAG to related PAGs by adding a 'related PAGs' box to the right of each PAG's summary content. Data processing related to the case study To show that PAGER-CoV improves COVID-19 functional genomics analysis, we compared the GSEA (10) results between two conditions: one using PAGER 2.0 as the reference pathway/gene set collection, the other using PAGER-CoV as the reference pathway/gene set collection. We selected the 'Transcriptional response to SARS-CoV-2 infection' from the GEO data series (ID: GSE147507) (32) for the case study. In the data filtering step, all four control samples from the 'NHBE Mock' group and three 'NHBE CoV' experimental samples were processed in parallel using the DESeq2 (33) pipeline. Then, we performed a standard GSEA analysis (10), comparing the results using the PAGER-CoV database (release date: 3 August 2020) with the results using the standard PAGER 2.0 database (14). For the GSEA analysis, the GSE147507 downloadable file for the normalized gene expression matrix and the sample label file 'GSE147507.all.label.gsea.cls' were used (Supplemental File S1).
For the GSEA chip platform, 'ftp.broadinstitute.org://pub/gsea/annotations_versioned/Human_Symbol_with_Remapping_MSigDB.v7.1.chip' was used, and all other parameters were left at the GSEA software (https://www.gsea-msigdb.org/gsea/downloads.jsp) defaults. As candidate PAGs for the GSEA analysis, we used only PAGs with gene-set sizes between 15 and 500. After filtering, 18,136 candidate PAGs in PAGER 2.0 and 4,612 candidate PAGs in PAGER-CoV remained. PAGER-CoV data compilation and data quality assessment In PAGER-CoV, we compiled a total of 11,835 PAGs from 33 data sources. Table 1 shows a summary of PAG counts categorized by data source. There are 13 data sources covering 271 PAGs manually curated from SARS-CoV-2 literature or relevant databases, 1,549 PAGs web-scraped from the COVID-19 PubChem database, and 19 PAGER 2.0-inherited data sources comprising 10,015 viral and immune-related PAGs inherited from PAGER 2.0. Figure 2 shows the nCoCo score distribution for all the PAGs (P-type, A-type, and G-type) distributed over different score intervals. Since the nCoCo score is a measure of PAG data curation quality (see the Materials and Methods section for details), we can compare the relative distribution of PAGs over nCoCo score intervals to determine how biologically 'informative' these PAGs can be. The quality score distribution indicates that P-type PAGs in PAGER-CoV have the highest quality (nCoCo score mean = 8,126), followed by A-type PAGs as the second-highest (nCoCo score mean = 338), and G-type PAGs as the lowest (nCoCo score mean = 155). However, the majority (92%) of all PAGs have a quality no less than the quality score cutoff (= 1). Figure 3A-F demonstrates a typical search session in PAGER-CoV. In Figure 3A (basic search), the user may enter a search term, such as 'spike protein', 'cytokine storm', 'ACE2', or 'TMPRSS'. Figure 3B shows the basic search result. Here, the 'ACE2' result contains 53 PAGs; 49 PAGs contain the ACE2 gene (matched by 'member'), and 2 PAGs have 'ACE2' in the PAG description (matched by PAG description). Figure 3C shows the list of PAGs, sorted by PAG size, when 'match by member' is selected. Selecting 'match by PAG description' shows a similar result. Here, the user may also filter the PAG list by PAG Type, Source, and Organism. Figure 3D shows the PAG information when a specific PAG is selected. From here, the user can view which genes the PAG contains (Figure 3E), how important each gene is in the PAG (quantified and sorted by the RP-score), and the relationships with other PAGs (Figure 3F). By using PAGER-CoV as a comprehensive database for interactive browsing, researchers can quickly gather gene set information, identify related literature, and generate new hypotheses. PAGER-CoV reveals insights into how bronchoalveolar immune cells respond to COVID-19 Since the lung is among the organs most commonly attacked by COVID-19, there have been many studies investigating the lung response to COVID-19. We were therefore interested in analyzing single-cell transcriptomic data under COVID-19 using PAGER-CoV. Here, we processed raw single-cell RNA-seq data from the GEO database GSE145926 data set. The data set was collected from clinical bronchoalveolar lavage fluid samples from moderate vs. severe cases of COVID-19 (34). The list of significantly differentially expressed genes computed using the Seurat pipeline (35) was used in the PAGER-CoV GSEA analysis.
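As a rough illustration of the kind of gene set over-representation computation underlying such an analysis, the Python sketch below scores a differentially expressed gene list against a collection of PAGs with a hypergeometric test and reports overlap counts and p-values; the data structures are toy examples and this is not the GSEA procedure or the PAGER-CoV server implementation.

from scipy.stats import hypergeom

def enrich(deg_genes, pags, background_size):
    # deg_genes: list of differentially expressed gene symbols.
    # pags: dict mapping a PAG id to a set of member gene symbols.
    # background_size: total number of genes in the background.
    deg = set(deg_genes)
    results = []
    for pag_id, members in pags.items():
        overlap = len(deg & members)
        if overlap == 0:
            continue
        # P(X >= overlap) for the hypergeometric overlap of the two gene sets.
        p = hypergeom.sf(overlap - 1, background_size, len(members), len(deg))
        results.append((pag_id, overlap, p))
    return sorted(results, key=lambda r: r[2])

# Toy usage: enrich(["ACE2", "IL6"], {"PAG1": {"ACE2", "TMPRSS2"}}, 20000)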
PAGER-CoV provided 692 PAGs (Figure 4A-C) with the default cut-offs as follows: 'type of PAG' set to 'all', 'size of genes in PAGs' ranging from 2 to 5,000, 'similarity score' ≥ 0.05, 'number of overlapping genes' ≥ 1, 'nCoCo' ≥ 0, 'P-value' ≤ 0.05, 'False Discovery Rate'-adjusted P-value (FDR) ≤ 0.05, 'species' set to 'all', and all 'data sources' selected. Among the top ten results retrieved by FDR, all are directly related to coronavirus infections, eight of which are manually curated PAGs. Interestingly, two (MAX000504, MAX000342) of the ten top-ranked PAGs were imported from PAGER from the same study (36); these are up-regulated and down-regulated gene sets in response to Epstein-Barr virus (EBV) infection in individuals with nasopharyngeal carcinoma epithelial cancer (Figure 4D). Other neighboring PAGs related to MAX000504 may also have major roles in the COVID-19 immune response. For example, GEX000051, a top-ranked downstream regulatory PAG for MAX000504, was shown as derived from a 'genome-wide association study of maternal cytomegalovirus infection and schizophrenia' (37). This molecular gene-set evidence confirms the potential linkage between COVID-19 and the psychiatric and neurological effects seen in SARS-CoV-2-infected patients, consistent with clinical observations of COVID-19 psychosis in many patients (38,39). Meanwhile, although MAX000342 is indirectly related to this study, the 277 down-regulated genes identified from Epstein-Barr virus (EBV)-associated nasopharyngeal carcinoma epithelial cancer tissue samples contain host MHC Class I HLA gene family members (40). Susceptibility to COVID-19 severity based on immune MHC haplotype is an area being actively investigated (41) and supported by increasing evidence (42). Other downstream regulatory PAGs to MAX000342 are reported by PAGER-CoV (Figure 4E). Users can download the search results and explore PAGs further on their own desktop computers. PAGER-CoV enhances GSEA analysis in a COVID-19-specific study Using the differentially expressed genes in the GSE147507 dataset as the input, our results show that GSEA supported by PAGER-CoV outperforms the same analysis supported by general-purpose gene set databases such as PAGER 2.0 (Figure 5). In the original study of GSE147507, the authors reported a transcriptional response of cells infected with SARS-CoV-2 that is distinct from that of other known respiratory viruses, namely markedly subdued interferon-I and -III expression as well as higher chemokine expression (most notably IL-6). Our PAGER-CoV-GSEA case study results are consistent with these findings because we observed significant enrichment of PAGs relating to (1) cytokine response and inflammation (WIG000864, WIG001072 and WIG000005) in Set B2, (2) NF-kB signaling (WIG000733 in Set B1; FEX000120 in Set C), and (3) other immune pathways upstream of IL-6 expression (WIG001050 in Set B2; WAG000055 in Set C; and FAX000905 in Set B1). Interestingly, three PAGs of high significance relating to the nervous system (WIG000823, FEX000140, WIG000048) from three unique data sources (WikiPathways, GeneSigDB, Reactome) were enriched in the PAGER-CoV-GSEA, suggesting strong biomolecular mechanistic links between COVID-19 and damage to the nervous system, as reported by (43). The complete enrichment results are listed in Supplementary Table S2. DISCUSSION In this work, we describe the development of a comprehensive coronavirus-related gene set database for functional genomic downstream studies.
With the continued influx of genomic and functional data, the PAGER-CoV database content will need to be periodically updated. We expect the updates will primarily be based on the framework described earlier, including both manually curated PAGs from the literature and automatically imported PAGs from gene set databases retrieved with refined search terms. To make the database truly useful, future developers must consider the delicate balance between comprehensive coverage, data quality, and the potential impact on GSEA recall performance among candidate PAGs. While we designed the database web user interface to be minimalistic for ease of navigation, we plan to introduce additional database features, e.g., reference data source links, additional PAG curation, and links to applications for network visual analytics, as this resource grows its user base. DATA AVAILABILITY PAGER-CoV is freely available to the public without registration or login requirements (http://discovery.informatics.uab.edu/PAGER-CoV/). The data are available for download on the condition that users cite this work when using data from the PAGER-CoV website.
5,885.6
2020-11-27T00:00:00.000
[ "Biology", "Computer Science", "Medicine" ]
Influence of peer networks on physician adoption of new drugs Although physicians learn about new medical technologies from their peers, the magnitude and source of peer influence is unknown. We estimate the effect of peer adoption of three first-in-class medications (dabigatran, sitagliptin, and aliskiren) on physicians' own adoption of those medications. We included 11,958 physicians in Pennsylvania prescribing anticoagulant, antidiabetic, and antihypertensive medications. We constructed 4 types of peer networks based on shared Medicare and Medicaid patients, medical group affiliation, hospital affiliation, and medical school/residency training. Instrumental variables analysis was used to estimate the causal effect of peer adoption (fraction of peers in each network adopting the new drug) on physician adoption (prescribing at least the median number of prescriptions within 15 months of the new drug's introduction). We illustrate how physician network position can inform targeting of interventions to physicians by computing a social multiplier. Dabigatran was adopted by 25.2%, sitagliptin by 24.5% and aliskiren by 8.3% of physicians. A 10-percentage point increase in peer adoption in the patient-sharing network led to a 5.90% (SE = 1.50%, p<0.001) increase in physician adoption of dabigatran, an 8.32% (SE = 1.51%, p<0.001) increase in sitagliptin adoption, and a 7.84% increase in aliskiren adoption (SE = 2.93%, p<0.001). Peer effects through shared hospital affiliation were positive but not significant, and medical group and training network effects were not reliably estimated. Physicians in the top decile of patient-sharing network peers were estimated to have nearly 2-fold stronger influence on their peers' adoption compared to physicians in the top decile of prescribing volume. Limitations include the lack of detailed clinical information and of data on pharmaceutical promotion, variables which may influence physician adoption but which are unlikely to bias our peer effect estimates. Peer adoption, especially by those with whom physicians share patients, strongly influenced physician adoption of new drugs. Our study shows the potential for using information on physician peer networks to improve technology diffusion. Introduction Diffusion of technology in US healthcare, while influenced partly by payer policies regarding coverage and reimbursement, is to a large extent driven by the individual decisions of practicing physicians. The relative absence of centralized technology assessment in the US reflects the importance of physician autonomy, allows for flexible responses to rapidly changing evidence, and creates opportunities to tailor decisions to individual patients. However, it comes at the cost of wide geographic variation, [1,2] excess spending, and sluggish translation of evidence into practice. [3,4] Recognizing that no US regions provide uniformly better care, an Institute of Medicine report recommended that efforts to achieve high-value healthcare target the loci of decision-making: hospitals, physician groups, and individual providers [5]. Yet the scale of changing provider behavior at the individual level is daunting. To make the most efficient use of resources for educating the workforce, [6][7][8] providers may be viewed as embedded in social systems. [9][10][11] While it is known that physicians learn from each other, the magnitude of peer influence is poorly understood and is a largely untapped resource.
The tools of social network analysis can be used to map the connections among providers and identify those playing a central role among their peers. [12] The extant studies applying social network methods to technology diffusion among physicians [11,[13][14][15][16][17][18][19][20][21][22] have been limited to uptake of a single technology and small physician samples. [16,21,23,24] Prior studies have relied primarily on physicians' self-reported information on peer connections, information that, while informative, would be cost-prohibitive to collect on a large scale. We take advantage of increasingly large and detailed healthcare databases to examine the value of harnessing social network information to drive physician adoption of evidence-based technologies. First, we constructed peer networks among nearly 12,000 physicians, drawing on multiple sources of information to form peer networks based on shared patients, practice settings, and training. Second, we estimated the magnitude of peer influence on technology adoption using as natural experiments the introductions of three new prescription drugs varying in novelty, clinical indication, number of competitors, and the specialties of physicians prescribing them. Third, we illustrate how simple information on physicians' positions in their peer networks can be used to target interventions to change physician behavior. Study setting and data sources Our study setting was Pennsylvania, the 5th largest US state, the population of which mirrors national averages in socio-demographic characteristics and on measures of health care utilization. [25] The study period during which the three drugs of interest were introduced and over which we measured their adoption was 2007-2011. We obtained 5 data sources, all of which … New medications of interest We measured physician adoption of dabigatran (an oral anticoagulant initially approved to treat atrial fibrillation on 10/19/2010), sitagliptin (an oral dipeptidyl peptidase-4 inhibitor approved to treat diabetes on 10/16/2006), and aliskiren (an oral direct renin inhibitor approved to treat hypertension on 3/05/2007) (S1 Supporting Information). All three medications were first-in-class, with a novel mechanism of action, although they varied in the extent to which they were viewed as superior within the broader therapeutic class, [27][28][29][30] the relevant patient populations, and the availability of substitutes. Physician cohorts We constructed three cohorts of physicians applying 4 broad inclusion criteria. We required physicians to: a) prescribe medications in one or more drug classes of interest (oral anticoagulant, antidiabetic or antihypertensive medications) during the study period (Parts a-c of S1 Table); b) have an AMA Masterfile record and a Pennsylvania practice address; c) have a record in the HCOS database; and d) demonstrate some minimal prescribing in the therapeutic class in the first 15 months after the new drug was introduced (minimal defined as ≥1 prescription/quarter) (see S1-S3 Figs for cohort construction). Peer network construction Physicians form relationships with peers whom they meet during training and in office- and hospital-based practice settings, and form referral and information networks with peers both within and outside of their own health systems. [11,14,15,[31][32][33][34][35] To capture these rich and potentially overlapping peer networks, we formed 4 types of physician social networks, illustrated in Fig 1.
We used the network analysis library igraph in Python [36] to construct the networks and measure network characteristics. Network construction is briefly summarized here, with additional information provided in S1 Supporting Information. First, we constructed patient-sharing networks (P) (depicted by black lines in Fig 1) using a previously published approach developed using patient-sharing in Medicare data. [37][38][39] Barnett and colleagues used self-reported connections to validate claims-based connections, reporting that two physicians billing Medicare for at least 9-10 patients in common during the same year were highly likely to self-report having a relationship based on referrals and/or advice. [37] We extend prior studies using data from a single payer to construct physician networks [40][41][42][43] by combining Medicare and Medicaid claims for unique patients to form the patient-sharing network. Second, we constructed a medical group network (G) using data from HCOS. We identified as medical group peers all physicians in the prescribing cohort with whom a physician shared a medical group or clinic affiliation. Third, a hospital network (H) was constructed similarly using data from HCOS on shared hospital affiliation (e.g., attending or admitting); we include in the hospital network all physicians in the cohort with a shared hospital affiliation. Last, a training network (T) was constructed using AMA Masterfile data on institutions attended and dates of graduation. Two physicians were connected if they attended the same medical school or the same residency program within +/-1 year of each other. Measuring adoption We defined adoption as writing at least the median number of prescriptions among physicians prescribing the new drug at least once in its first 15 months on the market. (Fig 1 legend: Within the cohort of physicians prescribing the drug class of interest, she is connected to peers (shown in pink) with whom she attended the same medical school (one year +/-) or with whom she completed the same residency program (one year +/-). She is connected to peers (shown in yellow) through the medical group where she has an outpatient practice and to peers (shown in blue) through the hospital where she admits patients. In addition, she shares Medicare and Medicaid patients with several physicians. The patient-sharing network is represented by the lines in the figure; line thickness corresponds to the number of patients shared between physicians. Connections shown in orange are affiliated with the physician in this illustration through a shared training institution and medical group. Connections shown in green share a medical group and hospital affiliation in common with the physician. Connections shown in purple only have shared patients with the physician.) The medians were 7, 13, and 7 prescriptions for dabigatran, sitagliptin, and aliskiren, respectively. We tested the robustness of our findings to alternative specifications of the adoption measure (e.g., ≥1, ≥15) (S12 Table). Peer adoption in each network was the key independent variable in our analysis and was measured as the fraction of a physician's peers who adopted the new drug (i.e., who had at least the median number of prescriptions in the first 15 months the new drug was on the market). This peer adoption measure (the "peer adoption rate") was computed in each of the four networks separately, as we hypothesized each to have a distinct influence.
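To make this measure concrete, the sketch below computes a weighted peer adoption rate from a toy patient-sharing edge list using python-igraph, the library used here for network construction; the data, the minimum-sharing threshold, and the helper function are illustrative assumptions rather than the study's actual pipeline. The four network-specific adoption variables are enumerated next.

import igraph as ig

# Toy patient-sharing data: (physician_a, physician_b, number of shared patients).
shared = [("dr_a", "dr_b", 12), ("dr_a", "dr_c", 9), ("dr_b", "dr_c", 3)]
adopted = {"dr_a": 1, "dr_b": 0, "dr_c": 1}  # 1 = wrote at least the median number of prescriptions

MIN_SHARED = 9  # assumed peer threshold, in the spirit of the 9-10 shared-patient rule
edges = [(a, b, w) for a, b, w in shared if w >= MIN_SHARED]
g = ig.Graph.TupleList(edges, directed=False, edge_attrs=["weight"])

def peer_adoption_rate(g, name, adopted):
    # Weighted fraction of a physician's patient-sharing peers who adopted,
    # with weights equal to the number of shared patients.
    v = g.vs.find(name=name)
    total = weighted_adopters = 0.0
    for e in g.es[g.incident(v)]:
        peer = g.vs[e.source if e.target == v.index else e.target]["name"]
        total += e["weight"]
        weighted_adopters += e["weight"] * adopted[peer]
    return weighted_adopters / total if total else 0.0

print(peer_adoption_rate(g, "dr_a", adopted))  # 9/21, about 0.43, for the toy data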
Hence, there are four separate adoption variables, for peers in: (1) the patient-sharing network, (2) the medical group network, (3) the hospital network, and (4) the training network. Peers in the training, medical group and hospital networks were assumed to have equal influence, while peers in the patient-sharing network were weighted based on the number of patients shared (i.e., a weighted average was used to compute the adoption rate in the patient-sharing network) (S1 Supporting Information). Statistical analyses We fit both linear probability models and logistic regression models to estimate the influence of these peer adoption rates, and other factors, on the individual adoption outcome for each physician. Comparing the estimated outcomes obtained from the linear and logistic models, we found that the mean and median differences of the estimated values were 0.1% and the first and third quartiles of the difference were within 3%. Accordingly, and to allow us to easily implement instrumental variables methods, we used linear probability models in which the outcome y_i is the binary indicator of adoption. Covariates included: characteristics of the individual physician (x_i) such as demographics, training, specialty, prescribing volume, and location; the age distribution and payer mix of the physician's patients filling prescriptions (also included in x_i); and the four variables for peer adoption rates in each network (ȳ_Pi to ȳ_Ti), the coefficients on these variables being the key estimates of interest (see S1 Supporting Information for additional details). We checked the overlap in connections among these 4 networks and determined that it was low (<20%), allowing us to estimate the effect of all 4 simultaneously in the same model. The estimation of peer effects with observational data has many challenges. [44][45][46] The most basic problem is that each physician influences her peers just as they influence her (simultaneity). In addition, physicians may choose peers who are similar to them (homophily), and there may be other, unobserved factors that are common among groups of peers. To address these sources of bias, we use an instrumental variables approach to estimation that is common in the econometric literature on peer effects. [47] A set of exogenous characteristics of peers, such as their sex, age, and location, serve as "instruments" which predict peer adoption rates in the absence of confounding factors. Estimation proceeds via two-stage least squares. In the first stage, the means of exogenous peer characteristics were used to predict the adoption rates in each peer network (S1 Supporting Information). In the second stage, the predicted adoption rates were then used in place of the observed adoption rates to estimate our main model. This approach yields consistent estimates as long as peer characteristics meet two criteria. First, they must have no direct influence on an individual's adoption outcome and must not be correlated with unobserved confounders. For example, the proportion of peers who are female should not directly affect whether a physician adopts the drug, net of the adoption rate among those peers. By contrast, we believe the proportion of peers who are relevant specialists or high-volume prescribers may have a direct influence on physician adoption. For example, a primary care physician may be less likely to adopt a new drug if he can easily refer complex patients to specialists in the same hospital.
Hence, neither the specialty mix nor the prescribing volume of peers is used as an instrument. Instead, we include in the main model eight variables for the proportions of peers in each network who are relevant specialists (c̄_Pi to c̄_Ti) or are high-volume prescribers (v̄_Pi to v̄_Ti). The assumption that the other peer characteristics are valid instruments can be partially assessed, [48] and this assessment is presented in detail in the S1 Supporting Information. Overall, the results indicate that our instruments are not correlated with unobserved confounders: the null hypothesis of no confounding is not rejected for two out of the three drugs (p = 0.213 for sitagliptin, p = 0.500 for aliskiren). The diagnostic testing for dabigatran suggests that unmeasured confounding could be an issue (p = 0.031), but given that a rejection of the null can also occur as a consequence of functional misspecification (e.g., variable definitions, presence or absence of interactions, etc.), and because there is no known reason why this drug would be different from the others, we maintain dabigatran in the present analysis. Second, the means of peer characteristics must be sufficiently predictive of the peer adoption rates. We assessed this with two sets of tests for weak instruments commonly used in the literature. [49] For all three drugs, the peer characteristics were sufficiently predictive in the patient-sharing and hospital networks, with first-stage F-statistics above 10, but not in the medical group and training networks. We then tested whether the instruments were sufficiently predictive for the patient-sharing and hospital networks jointly, using the minimum eigenvalue statistic, which confirmed that the instruments have sufficient power in these two networks (S8 Table). As a consequence of this assessment, we consider the estimated peer effects in the patient-sharing and hospital networks to be statistically reliable, but not those in the medical group or training networks. After quantifying the magnitude of peer influence in each network, we used our estimates to compute an aggregate "social multiplier" on adoption [50] (S1 Supporting Information). Conceptually, the social multiplier represents the number of other physicians expected to adopt a new drug following adoption by a given individual physician. This reflects both the direct influence a physician has on her own peers as well as her indirect influences on others throughout the network. The social multiplier in the patient-sharing network, our focal network, is defined in terms of w_ji, the weight on the link between physicians j and i based on the number of shared patients, and γ_P, the coefficient on the peer adoption rate from the linear probability model of adoption (the formula is given in S1 Supporting Information). The social multiplier is an eigenvector centrality, equivalent to a type of Bonacich power centrality [51] in a weighted, directed graph, where γ_P serves as the attenuation parameter. We calculated the social multiplier for each physician in the network, and then used it to conduct an illustrative simulation based on the patient-sharing network and adoption of dabigatran. Imagine a health system wishing to achieve rapid diffusion of a new technology seen as cost-effective for treating a condition. In order to target scarce resources efficiently, that system could target interventions to physicians seeing a high volume of the relevant patient population.
Alternatively, the payer could target interventions to physicians with many peer connections observed in administrative claims data. To illustrate the value of targeting physician interventions based on social network vs. other physician characteristics, we present the average multiplier (i.e., the number of other physicians who would adopt a drug) by physician prescribing volume and by a simple measure of network centrality: the number of direct peer connections a physician has (through patient-sharing), also known as degree. [52] We used SAS 9.4 (Cary, NC) for data management and variable construction, the network analysis library igraph in Python [36] to construct the peer networks, STATA V14 to estimate the instrumental variable models, and R x64 3.3.2 to compute the multiplier described above. This study was approved by the University of Pittsburgh Institutional Review Board. Descriptive characteristics of physicians We examined adoption of new drugs among 7,785 physicians prescribing anticoagulants, 8,257 prescribing antidiabetic medications, and 9,974 prescribing antihypertensives. Physicians were, on average, 49-50 years old and roughly one-quarter were female (Table 1). Most (61-72%) physicians prescribing these medications were in primary care specialties (e.g., internal medicine). The medical sub-specialties of interest (cardiology, nephrology, and endocrinology) made up 3-14% of physicians depending on the cohort. Most physicians were affiliated with at least one medical group (69-75%) and hospital (90%). The proportion of physicians adopting the new drugs varied; one-quarter (25.2%) of anticoagulant prescribers adopted dabigatran, a similar fraction of antidiabetic prescribers adopted sitagliptin (24.5%), but fewer antihypertensive prescribers adopted aliskiren (8.3%) (Table 1). Several characteristics differed significantly in bivariate analyses between adopters and non-adopters, including the percent who were specialists and who had high prescribing volumes (S3-S5 Tables). In each cohort, the patient-sharing network had the greatest number of peers, ranging from 200-344 depending on the cohort, with a small share of providers (2-6%) having no peers in the patient-sharing network (S7 Table). Physicians in each cohort shared at least one hospital affiliation with an average of 166-204 peers. Physicians had fewer peers in the medical group (9-12) and training (39-46) networks. Unadjusted association between peer adoption and own adoption In unadjusted analyses, a physician's likelihood of adopting a drug was strongly associated with the likelihood that her peers had also adopted the drug. For example, a physician for whom two-thirds of her peers had adopted dabigatran was twice as likely to adopt the drug compared to a physician for whom one-third of her peers had adopted the same drug (Fig 2a). Adjusted estimates of peer effects on adoption In the fully-adjusted instrumental variables analysis, peer effects on adoption were largest in the patient-sharing network for all three drugs (Fig 3). For example, among anticoagulant prescribers, the coefficient for peer effects in the patient-sharing network was 0.590 (standard error (SE) = 0.150, p<0.001). This implies that for every 10 percentage point increase in the fraction of peers in the patient-sharing network adopting dabigatran, a physician's own adoption probability increased by 5.90%. The coefficients on peer adoption in the patient-sharing network were of similar magnitude for the other two new medications.
A 10 percentage point increase in peer adoption corresponded to an 8.32% (SE = 1.51%, p<0.001) increase in sitagliptin adoption, and a 7.84% increase in aliskiren adoption (SE = 2.93%, p<0.001) (S10 Table displays full model estimates). After adjusting for other peer effects and covariates, own adoption was positively influenced by peer adoption in the same hospital, although the effect was not statistically significant for any of the drugs at p<0.05 (Fig 3). The coefficient for hospital peer adoption was 0.319, SE = 0.188, p = 0.09 for dabigatran; 0.376, SE = 0.211, p = 0.08 for sitagliptin; and 0.367, SE = 0.261, p = 0.16 for aliskiren. Estimates of medical group and training network effects were not reliable for any of the three drugs due to weak instruments. Fig 4 shows the social multipliers, using the adoption of dabigatran in the patient-sharing network to illustrate the potential for using social network information to target interventions. The figure presents the average multiplier by decile of peer connections (i.e., the number of physicians with whom a physician shares patients) and by decile of prescribing volume, and shows the value of targeting interventions using peer network information. Adoption by physicians in the top decile of connections is projected to induce 28 times as many other physicians to adopt compared with physicians in the bottom decile (4.81 vs. 0.16 physicians adopting). By contrast, targeting physicians in the top volume decile is projected to induce only twice as many adoptions as targeting physicians in the bottom decile (2.64 vs. 1.33 physicians adopting), and little more than half as many as targeting the top decile of connections (2.64 vs. 4.81). Discussion We find that physician adoption of new drugs is heavily influenced by the extent to which their peers have adopted those new drugs. These effects were particularly large for peers with whom physicians share patients. Our study points to a potential mechanism underlying the tremendous variation in US medical care and to the importance of viewing physicians as part of a larger social system. We sought to measure and test the effects of a rich set of peer influences derived from the institutional affiliations that physicians have with medical group practices, hospitals, and health systems. We also examined the influence of peers in the informal networks physicians form as they develop referral relationships and interact with other physicians in the management of shared patients. Notably, peers in these informal patient-sharing networks had the largest effects on adoption of all three drugs after adjusting for the influence of peers in the same medical group and hospital setting. The patient-sharing network may exert the most influence on physician adoption because it captures more active connections over which physicians exercise the most control. A physician likely has more discretion, for example, over whom he refers his patients to than he does over the physicians with admitting privileges to the same hospital. This element of choice in the patient-sharing network raises the possibility that our estimates may be influenced in part by homophily, [53,54] although our use of instrumental variables reduces this as a concern.
A key advantage of using patient-sharing information to measure peer networks is the routine availability of claims data to payers, the stakeholders best-positioned to invest in large-scale interventions to improve the quality of care. The magnitude of the patient-sharing peer effects on physician adoption was broadly consistent across three drugs introduced over a 4-year time period that were prescribed to different patient populations and by physicians with primary care and medical sub-specialty training. We studied the introduction of three newly-approved drugs that were first-in-class, each with a novel mechanism of action. All three medications have a place in the treatment armamentarium alongside older, existing therapies but varied with respect to whether they are considered first- or second-line treatments. That the magnitude of peer influence on adoption was comparable for all three of these drugs adds to the generalizability of our findings. Meltzer and colleagues describe an approach to using social network analysis to form quality improvement teams to maximize the reach of interventions. [11] Similarly, our social multiplier exercise illustrates that targeting interventions to physicians who are well-connected in patient-sharing networks may be a more efficient way of improving the diffusion of evidence-based therapies. For example, health systems routinely adopt intensive educational interventions such as academic detailing, face-to-face educational programs borrowing principles from pharmaceutical promotional efforts. (Fig 3 legend: Estimates show the effect of a 1% absolute change in the adoption rate of a physician's peers on the likelihood of a physician's decision to adopt the new drug. For example, a 1% increase in patient-sharing network peer adoption of dabigatran corresponded to a 0.59% increase in the probability of own adoption. Estimates are from a two-stage least squares regression model including physician-level characteristics: sex, medical school graduation year, US vs. non-US medical school, Top 20 US medical school vs. not, geographic indicators (hospital referral region and metropolitan vs. non-metropolitan), prescription share paid for by Medicare, Medicaid fee-for-service, and cash, and age of patients filling prescriptions (see S10 Table for full model estimates). The means of peer characteristics serve as instruments for peer adoption rates. Included in the second stage were: physician specialty (primary care physician, relevant sub-specialty, e.g., cardiologist, endocrinologist, nephrologist, vs. other), the proportion of peers in each network who are in each specialty group, an indicator for whether the physician was a high-volume prescriber (≥ median), the proportion of peers in each network who were high-volume prescribers, and an indicator for physicians who do not have peers in a particular type of network in the cohort. Peer effects estimates in the medical group and training networks are not reliable due to poor predictive power of the instruments. https://doi.org/10.1371/journal.pone.0204826.g003) These educational interventions may reduce overprescribing of ineffective medications or increase use of highly effective agents. [7] Because academic detailing visits are meant to be in-depth and frequent, [55] they are difficult to deliver on a large scale. However, systematic reviews suggest that academic detailing is seldom targeted.
[56] Our study relates to an extensive literature on speeding diffusion of innovations in healthcare and other sectors by targeting 'key opinion leaders' who, by virtue of their technical competence, credibility, and/or social acceptability, are seen as influential with their peers. [57][58][59][60] What our study adds to this body of work is information on the potential for using patient-sharing-based measures of network centrality to target influential physicians at a larger scale than a single health care setting or community. [61] We focus on the most basic centrality measure, degree centrality, because it is simple to construct, facilitating its widespread use, and does not require complete observation of the network. Other centrality measures such as eigenvector centrality, Bonacich centrality, PageRank centrality, and density centrality have been used to characterize other technological diffusion processes [51,[62][63][64] and may be useful in this context. Our study improves on prior work by including a larger physician sample, examining the diffusion of multiple new drugs, drawing on several information sources to form 4 types of physician networks, and using an instrumental variables approach to overcome some challenges in estimating network effects. Nevertheless, we note some limitations. First, although Pennsylvania resembles national averages on most measures of healthcare utilization, [25] our findings are limited to a single state, and estimates of peer effects on physician adoption may not generalize to other geographic areas with different physician network structures [38] or adoption patterns. Second, while our study of adoption of three new prescription drugs improves on prior studies of uptake of a single technology, our estimates of peer effects may not generalize to all medical technologies. Third, while we adjusted for several physician characteristics associated with adoption, we lack information on important sources of influence on physician behavior, namely patient clinical characteristics, pharmaceutical company promotion, and payer formularies. Because we only had access to information on the age and source of payment of a physician's patients who filled their prescriptions, we are unable to adjust for differences in adoption due to patient health state. We note that all three of the new drugs were first-in-class and likely to be heavily promoted by manufacturers; however, our study predates the availability of physician-level data on exposure to industry promotion. We were not able to adjust for differences in formulary coverage of the new drugs for a physician's patient panel. Fourth, as with any study that uses instrumental variables, our estimates capture the effects of peer adoption rates that were driven by the peer mean characteristics we use as instruments, not by other sources of variation, which potentially limits their generalizability. Using multiple data sources to measure the rich set of peer relationships formed by physicians, we find that peers can exert significant influence over physician technology adoption decisions. Our study shows the potential for using information on physician social networks routinely available to health systems to improve the targeting of interventions to speed the diffusion of evidence-based health care technologies. Table. Sample size of physicians in each prescribing cohort with Medicare and/or Medicaid claims during the adoption measurement period.
Sources: Medicare data were obtained from CMS. Medicaid data were obtained from the Pennsylvania Department of Human Services. Notes: In the anticoagulant cohort, there are 7,785 physicians meeting inclusion criteria, of whom 7,522 (96.6%) had Medicare claims and 6,680 (85.8%) had Medicaid claims. We included claims submitted by those physicians to Medicare and Medicaid with dates of service between 10/1/2010 and 12/31/2011 to match the period over which we measured adoption of dabigatran. In the antidiabetic cohort, 8,257 physicians met inclusion criteria. We included claims from Medicare and Medicaid submitted by those physicians with dates of service between 1/1/2007 and 1/31/2008 (the measurement period for adoption of sitagliptin). There are 9,974 physicians meeting inclusion criteria for the antihypertensive prescriber cohort. We included claims from Medicare or Medicaid with dates of service between 3/1/2007 and 5/31/2008, the measurement period for adoption of aliskiren. (DOCX) S7 Table. Number of physician peers to which physicians are connected in patient-sharing, medical group, hospital and training networks. Data sources: QuintilesIMS, HCOS; XPonent; AMA Masterfile. 1 The column displays the number of physicians who are not connected to any peers in a particular network. For example, physicians may lack peers in the patient-sharing network because they do not see Medicare or Medicaid patients or because they did not share any patients with that source of coverage with other physicians in the prescribing cohort. (DOCX) S8 Table. Assessments of instrument exogeneity and relevance. (DOCX) S9 Table. Peer characteristics used as instruments in each network. Data sources: QuintilesIMS, HCOS; XPonent; AMA Masterfile. * In the analytical dataset, the unit is the physician. For categorical variables, we obtain the proportion of each type (e.g., proportion of peers who are female) for each physician. For the continuous variables, we use the average percentage of pay type (e.g., Medicare) of all peers for each physician. Table shows
6,924.4
2018-10-01T00:00:00.000
[ "Medicine", "Economics" ]
New insights gained from museum collections: Deep-sea barnacles (Crustacea, Cirripedia, Thoracica) in the Muséum National d’Histoire Naturelle, Paris, collected during the Karubar expedition in 1991 An examination of the deep-sea barnacles (Cirripedia, Thoracica) collected by the Karubar expedition to Indonesia (1991) and deposited in the Muséum National d’Histoire Naturelle, Paris, identified 40 species contained in three families of stalked and five families of acorn barnacles. Information on these species is presented, including descriptions, updated distributions and images to aid species identification. Thirty of the species, treated herein, are new records for the Indonesian Kei Islands and Tanimbar Island, which increases the total number of species recorded from Kei Islands, Aru Island and Tanimbar Island to 40. This study demonstrates the value of museum collections as a resource in biodiversity science. Introduction In 1991, scientists from France and Indonesia conducted collaborative research through the Karubar expedition in Indonesia. The acronym for this expedition, which collected the material reported on herein, is a contraction of the names of the Kei, Aru and Tanimbar Islands. These Islands attracted attention after Professor Th. Mortensen's Danish expedition to the Kei Islands . Mortensen suggested that the Islands were an ideal place for a marine laboratory to study deep-sea fauna, as he had found stalked crinoids, elasipods and other abyssal creatures at depths of 200-400 m around the Kei Islands (Crosnier et al. 1997). The Karubar expedition was part of the MUSOR-STOM-Tropical Deep-Sea Benthos programme (1976-present). This programme was a collaboration between the Muséum National d'Histoire Naturelle (MNHN), Paris and the Institut de Recherche pour le Développement (IRD) (formerly ORSTOM), to explore the deep-sea fauna of the tropical Indo-Pacific. As the programme was inspired and guided by carcinologists, it is not surprising that ~ 33% of the papers resulting from these cruises concern crustaceans, especially crabs, lobsters and shrimps (Richer de Forges et al. 2013). Diagnosis. Capitulum with up to five plates, including tergum and scutum; scutum in some species split into two (resulting in seven plates); some or all plates may be degenerate or absent; umbos of terga apical, those of carina and scuta fundamentally basal; peduncle without calcareous scales; maxillule not stepped; cirrus I widely separated and much shorter than posterior cirri; caudal appendages uniarticulate, spinose. Diagnosis. Capitulum broadly oval, with five smooth plates; carina not extending to area between terga; peduncle with circles of small protuberances; cirri short. Distribution. Indo-west Pacific: Indian Ocean; Madagascar through Malaysia, Hong Kong, South China Sea; Taiwan; Philippines; South Japan; tropical West and central Pacific Ocean to Fiji and Hawaii; attached to decapod crustaceans; shallow water (Jones and Hosie 2016). In this study, Dianajonesia amygdalum was found at Tanimbar Island, Indonesia. Description. Capitulum oval, apex pointed, slightly thick, swollen. Scutum with larger segment strongly bowed, basal margin short, apex pointed; smaller segment bowed, terminating in point at base, tergal margin rounded, fitting exactly into excavation of tergum. Tergum triangular, characteristic excavation at scutal margin near occludent margin. Carina narrow, terminating in spatula-shaped disc. 
Cirrus I with anterior and posterior rami subequal (each five-segmented); cirri II-VI longer, more slender; cirrus VI with caudal appendages. Penis thick, ringed, especially mid-length, terminating in narrower, curved part. Maxillule notched, two large teeth on upper side; mandible with four teeth, largedistance between first and second teeth. Labrum convex, with numerous blunt teeth. Measurements of specimen: basal diameter of capitulum 1.06 mm; capitular height 7.12 mm; total height 12.69 mm; scutal width 3.08 mm; scutal length 6.09 mm; tergal width 1.30 mm; tergal length 3.47 mm. Description. Capitulum white, with five calcified plates, surfaces strongly striated. Scutum with basal margin rotated; tergum triangular in lateral view; carina with dorsal roof widening apically on either side of midline groove. Cirrus I with anterior ramus wider than posterior ramus. Maxillule with three strong setae at upper angle separated by wide notch; mandible with four teeth, lower angle sharp. Measurements of specimen: basal diameter of capitulum 2.14 mm; capitular height 9.36 mm; total height 9.36 mm; scutal width 3.81 mm; scutal length 7.28 mm; tergal width 1.58 mm; tergal length 4.29 mm. Distribution. West-southwest Pacific, Indo-west Pacific, East coast of Africa, Indian Ocean, north Australia, Indonesia, Malay Archipelago, East China Sea, South China Sea, Taiwan, Philippines, south Japan to New Zealand; attached to echinoid spines, antipatharians, gorgonians, glassy spicule of hexactinellid sponges, corallines; 125-984 m depth (Jones and Hosie 2016). In this study, Megalasma striatum was found at Kei Islands and Tanimbar Island, Indonesia. Diagnosis. Formerly, the subfamily was characterised by a subapical carinal umbo, inflexed carina and subapical umbones of the upper and inframedian latus (Zevina 1978a). Gale (2016) characterised the subfamily by the broad, low, straplike and gently incurved rostrolatus. The rostrum is broader than high, rectangular, trapezoidal or triangular and its large, triangular, lateral surfaces contact the interior of the rostrolatus. The articulation surface between the rostrum and rostrolatus extends over the entire height of both plates. Description. Capitulum flat, rather broad, not covered by distinct membrane. Scutum with occludent margin arched, forming with tergal margin a triangular portion projecting over tergum. Tergum surpassing scutal area with occludent margin almost straight. Upper latus quadrangular, angle at apex between scutal and tergal margins distinctly projecting over scutum. Rostrum small, triangular; rostrolatus very low, quadrangular; infra-median latus small, triangular, umbo at apex; carinal latus larger than other latera with carinal margin arched. Cirrus I with anterior and posterior rami almost same length; cirrus VI with long caudal appendages. Maxillule not notched, with large spine on upper side, cutting edge almost straight; mandible with three large teeth excluding inferior angle. Measurements of two specimens: height of capitulum 12.51-20.57 mm, width 7.39-10.55 mm, thickness 4.09-6.51 mm; length of peduncle 5.10-7.76 mm, width 4.36-6.62 mm. Type locality. East coast of Japan, between the Bay of Tokyo and the Inland Sea (Jones 1992). Remarks. For the first time, Scalpellum stearnsi was found in Japan and described by Pilsbry (1890). During the Siboga expedition (1899), S. stearnsi was collected from different locations in the Malay Archipelago with the depths varying from 204 m to 450 m. 
Hoek (1907) found intraspecific variations of the shell plate morphology. He then divided S. stearnsi into two groups, i.e. variety robusta and var. gemina, which differed in the shape of the tergum. The species S. stearnsi in this study belongs to the group of var. gemina because of the V-shaped tergum. Scalpellum stearnsi has a low period of larval development (Ozaki et al. 2008) and a slow growth rate (Yusa et al. 2018). This can result in the broad geographical distribution of this species. Recently, Lin et al. (2020) examined the diversity and genetic differentiation of populations of S. stearnsi from the East China Sea, West Philippine Basin, Sulu Sea and Caroline Trenches, which resulted in four distinct clades of S. stearnsi. Meroscalpellinae Zevina, 1978b: 1343 Diagnosis. Capitulum with 14 or 13 plates, reduced in differing stages or proportions; carina with two umbo positions; females considered rarer than hermaphrodites; males sac-like, usually without plates, rarely with two or four reduced plates. Alcockianum persona Description. Capitulum brownish, large, ovoid, inflated, with 13 capitular plates, including a vestigial rostrum, plates embedded and mostly concealed by thick, opaque membrane. Scutum small, widely separated from all remaining plates except tergum, margins not excavated or deeply concave; tergum reduced in form as four-pointed star, with two rays greatly and two rays slightly produced. Carina reduced in size, apex approaching terga, widely separated from remaining plates. Peduncle cylindrical, similar length to capitulum, with large calcareous scales arranged in alternating rows. Cirrus I with anterior ramus oval (8-segments), posterior ramus slender, long (12-segments); cirri II-VI slender, long, rami almost equal length; cirrus VI with caudal appendages; caudal appendages 1/3 length of cirrus VI, 15-segmented, tapering distally. Penis rather short, smooth, pointed. Maxilla bilobed, dense setae on margin. Maxillule relatively large, with broad, shallow excavation on lower margin occupying more than half margin, remainder of margin obliquely subtruncate; mandible with three main teeth in addition to inner angle, which is variously divided, broad as a whole. Measurements of five Distribution. Indonesian Seas, eastern Australia, New Zealand; 109-915 m depth (Jones 1992). In this study, Alcockianum persona was found at Tanimbar Island, Indonesia. Distribution. Eastern Indian Ocean; Northwest and Western Central Pacific; Malay Archipelago; Japan; Taiwan; Indonesia; attached to shell of gastropod, gorgonians, rocks; 805-6,810 m depth (Jones and Hosie 2016). In this study, Annandaleum japonicum was found at Tanimbar Island, Indonesia. Scalpellum molliculum Description. Capitulum compressed; 13 plates completely covered by fine, hairless membrane. Peduncle half length of capitulum, stout, cylindrical, armed with small, transversely elongated plates. Scutum subtriangular, lateral margin excavated with tooth above excavation blunt, short, simple; tergum almost triangular, scutal margin excavated, but not very boldly, occludent margin slightly, regularly convex outwards. Carina simply bowed, umbo subterminal, in contact with terga above or just entering between them. Cirrus I unequal, anterior ramus oval, posterior ramus slender, long; cirri II-VI slender, long, rami almost equal lengths; cirrus VI with long, slender caudal appendages. Maxillule slightly notched, two major setae on upper side; mandible with four teeth. 
Measurements of specimen: height of capitulum 18.88 mm, width 11.41 mm, thickness 6.30 mm; length of peduncle 11.45 mm, width 4.89 mm. Distribution. Gulf of Oman, Arabian Sea, Sri Lanka, Japan (Chan et al. 2009b). In this study, Annandaleum laccadivicum was found at Tanimbar Island, Indonesia. Distribution. Southeast Pacific Ocean (Newman and Ross 1971). In this study, Litoscalpellum walleni was found at Tanimbar Island, Indonesia. Description. Capitulum yellowish, with 13 fully calcified plates. Peduncles short with scales slightly overlapping in the middle part. Scutum with pit for complemental males, above shallow pit for adductor muscle. Carina wide in lower part, ribbed in upper part. Upper latus with straight sides; rostrum appearing externally as inverted triangle. Cirrus I unequal, anterior ramus oval, posterior ramus slender, long; cirrus VI with very short caudal appendages. Maxillule with notch between two or three stout setae at upper angle, group of more slender setae on cutting edge; mandible with three teeth excluding inferior angle; labrum cutting edge slightly concave, numerous pointed teeth on cutting edge. Measurements of two specimens: height of capitulum 12.44-13.88 mm, width 6.97-7.52 mm, thickness 2.77-3.47 mm; length of peduncle 2.99-3.15 mm, width 2.94-3.24 mm. Description. Capitulum yellowish, elongate-oval shape; surface with distinct lines of growth. Carina large, simply bowed. Scutum with umbo at apex, slightly recurved, projecting slightly over tergum; tergum triangular, stout, broad, apex recurved, scutal margin almost straight. Upper latus quadrangular, apex slightly projecting over scutum. Rostral latus quadrangular, scutal and basal margins parallel. Carinal latus quadrangular, carinal margin almost straight. Cirrus I unequal, anterior ramus oval, posterior ramus more slender. Maxillule with notch between two or three stout setae at upper angle, a group of more slender setae on cutting edge; mandible with three teeth excluding inferior angle; labrum with numerous blunt teeth on straight cutting edge. Measurements of specimen: height of capitulum 7.26 mm, width 3.74 mm, thickness 1.40 mm; length of peduncle 3.00 mm, width 1.99 mm. Distribution. Indian Ocean, Antarctic and Southern (North East of Prince Edward Island); known depth 2,516 m (Shalaeva and Boxshall 2014). In this study, Amigdoscalpellum tenue was found at Kei Islands, Indonesia. Description. Capitulum long, narrow, sparsely covered with hairs, plates separated by narrow, chitinous interspaces, marked with growth lines. Occludent margin strongly convex; carinal margin irregularly straight; apex slightly retroverted towards carinal side. Carina long, simply bowed; roof flat; parietes well developed towards distal half of plate. Tergum triangular, occludent margin short, convex, scutal and basal margins almost straight, carinal margin concave. Scutum with umbo apical, overlapping occludent margin of tergum. Upper latus triangular; carinal latus twice as long as broad; inframedian latus rectangular; rostral latus nearly rectangular in outline; rostrum large, elongate triangular, broad above, pointed below. Cirrus I unequal, anterior ramus oval, posterior ramus more slender; cirrus VI with caudal appendages. Maxillule not notched, stout spine along the cutting edge; mandible with three teeth excluding inferior angle. Measurements of specimen: height of capitulum 9.67 mm, width 5.52 mm, thickness 2.58 mm; length of peduncle 2.79 mm, width 2.29 mm. Distribution.
Java Sea, Indonesia; Philippines (Chan 2009). In this study, Teloscalpellum ecaudatum was found at Kei Islands and Tanimbar Island, Indonesia. Distribution. Pacific, Western Central; South and East China Sea; South of Sumatra, Banda Sea, Indonesia; Vietnam; Philippines; Taiwan; South of Japan; attached to crinoids, hydroids; 220-1,097 m depth (Jones et al. 2001; Chan et al. 2009b; Shalaeva and Boxshall 2014). In this study, Trianguloscalpellum balanoides was found at Kei Islands and Tanimbar Island, Indonesia. Scalpellum imperfectum Description. Capitulum elongate, plates covered by thin, chitinous membrane. Scutum elongated, apex pointed, occludent margin very convex. Tergum flat, triangular, apex very recurved, occludent margin very arched. Carina with umbo at top of flat roof. Upper latus flat, irregular pentagonal; rostrum narrow, elongated; rostral latus convex with rostral margin short; inframedian latus wine-glass-shaped; carinal latus flat, large. Peduncle short, calcareous scales distinct. Cirrus I unequal, anterior ramus broader than posterior ramus; cirri II to VI long, rami equal; cirrus VI with caudal appendages. Maxillule not notched, two large spines on upper side, cutting edge almost straight; mandible with three large teeth excluding inferior angle. Measurements of specimen: height of capitulum 9.14 mm, width 4.53 mm, thickness 1.74 mm; length of peduncle 2.54 mm and width 2.20 mm. Distribution. Atlantic, excluding polar areas; Pacific, Southeast. Known depth range 600 to 2,400 m (Shalaeva and Boxshall 2014). In this study, Verum carinatum was found at Kei Islands, Indonesia. Diagnosis. Shell not depressed; carina and rostrum interlocking with single rib from each plate; movable plates large, scutum with four articular ribs, tergum with six articular ribs, growth lines very distinct; caudal appendages long. Description. Shell yellowish. Movable scutum elongate triangular, apex distinctly beaked, projecting freely; surface with numerous articular ridges. Movable tergum large, quadrangular; surface with strongly developed, curved axial articular ridge. Carina and rostrum irregular quadrangular, with carina higher, rostrum broader. Fixed tergum with two parts: (1) triangular portion very narrow at apex, slightly broader in its inferior portion; (2) flat and broad part at a rear portion of shell. Fixed scutum pointed with distinctly beaked apex; composed of broader, nearly flat, triangular portion and narrower inflected portion, only widening towards its inferior. Base of shell elongate-oval. Cirrus I with rami very unequal (anterior ramus: 12-segmented, posterior ramus: 28-segmented); cirrus VI with caudal appendages. Maxilla bilobed, fringed with setae, except on the notch; maxillule widely notched, horizontally elongated, two large spines above notch, numerous dense setae at notch; mandible with three teeth excluding inferior angle; labrum slightly concave, conical teeth on cutting margin. (Chan et al. 2010). In this study, Altiverruca navicula was found at Tanimbar Island, Indonesia. Description. Movable plates parallel to base, wall of parietal vertically ribbed; fixed scutum without internal pit. Movable scutum with crescentic ridge and longitudinal striations; movable tergum with articular ribs and diagonal rib. Apices of fixed scutum and tergum contiguous. Carina occupying carino-rostral wall, apices marginal. Cirrus I with rami unequal and serrulate setae; cirrus VI with caudal appendages.
Maxilla globular, with fringing setae; maxillule notched, two large setae on upper side; mandible with three teeth excluding inferior angle. Diagnosis. Shell with four or six plates; wall solid or permeated by single row of chitin-filled longitudinal canals; radii absent; one or both rami of cirri I and cirri II sometimes antenniform; labrum without notch in crest. Description. Shell yellowish, conical, with six plates. Orifice diamond-shaped; scutum triangular elongated with protruding growth-ridges; tergum smaller than scutum, apex beaked, carinal margin rounded, growth-ridges less distinct than on scutum. Cirrus I with unequal rami (anterior ramus: 8-segmented; posterior ramus: 12-segmented), dense long setae on surface areas. Cirrus II with equal rami, dense long setae. Cirri IV-VI with equal rami with numerous segments; segments almost without exception furnished with two pairs of very long, stiff, needle-like spines along inner faces. Measurements of specimen: basal length of shell 14.32 mm, orifice length 8.00 mm, carinal height 12.39 mm, orifice width 6.49 mm, basal width 13.76 mm. Description. Shell yellowish with orange rust-brown in proximal areas. Carina, carinolatera and latera with pale orange-brown and rust red-brown longitudinal stripes, latter may have oblique white spots. Radii with pale orange-brown and rust red-brown horizontal striation. Opercular plates with scutum pink-brown, transparent; tergum transparent white. Shell may appear longer and lower, due to elongation of carina and rostrum, or low and comparatively shorter, due to development of rostrum alone, or more upright and comparatively higher, with neither carina nor rostrum elongated. Cirrus I with unequal rami (anterior ramus: 7-segmented; posterior ramus: 12-segmented). Cirri II-VI with equal rami, numerous segments. Penis very long, delicate hairs scattered over surface, a few more disposed near tip. Labrum deeply notched, two small teeth on each side of notch. Mandibles with five teeth, inferior angle not distinctly separated from fifth; distance between tips of first and second teeth slightly more than that between those of second and third teeth; third tooth larger; fourth and fifth smaller than others. Maxillule with straight edge and numerous large setae. Measurements of specimen: basal length of shell 7.73 mm, orifice length 4.60 mm, carinal height 8.70 mm, orifice width 3.88 mm, basal width 4.94 mm. Distribution. Indo-west Pacific: Indian Ocean; Gulf of Aden, India, east to Fiji and NW to Indonesia, N Australia, Malay Arch.; China; Philippines; S Japan; Fiji Is; attached to coenosarc of gorgonians or antipatharians; littoral-453 m depth (Jones and Hosie 2016). In this study, Conopea cymbiformis was found at Kei Islands, Indonesia. Type locality. Near Madras, India; attached to a gorgonian (Darwin 1854). Diagnosis. Shell with parietes and basis not porose; carino-lateral compartments very narrow, almost same width from top to bottom; radii with smooth sutural edges; scutum externally striated longitudinally. Conopea navicula (Darwin, 1854) Description. Specimens covered with coenosarc of coral, except orifice. Easily recognisable species due to narrow carino-lateral plate, which is nearly same width at top as bottom; scutum externally longitudinally striated; parietal plates studded with calcareous points. Parietal plates pearly white, solid, superficially appearing to possess longitudinal tubes, growth lines horizontal. Alae moderately developed. Basis calcareous. Size small.
Rostrum well developed, concave, lying at angle of ~45°. Laterals very well developed. Carino-lateral parietes thin, radii and alae well developed. Carina tall, about half width of rostrum. External surfaces of all parietes with very small, calcareous studs, regularly spaced, arranged along horizontal growth lines. Opercular plates sunk down into orifice. Cirrus I with unequal rami (anterior ramus: 5-segmented; posterior ramus: 7-segmented). Cirrus II with unequal rami (anterior ramus: 6-segmented; posterior ramus: 9-segmented). Cirri III-VI with subequal rami more slender, longer, with segments more elongate. Penis very long, tapering towards tip, bearing few, very minute hairs. Maxillule with straight edge with numerous large setae. Mandibles with five teeth and inferior angle. Measurements of four specimens: basal length of shell 2.23-4.22 mm, orifice length 1.09-2.02 mm, carinal height 2.04-3.09 mm, orifice width 0.94-1.59 mm, basal width 1.79-2.80 mm. Distribution. Indo-west Pacific, from Gulfs of Aden and Persia, India, Malaysia, Indonesia, Gulf of Siam, to southern Japan; 45-220 m depth (Jones and Hosie 2016). In this study, Conopea navicula was found at Tanimbar Island, Indonesia. MNHN-IU Description. Shell with plates ribbed longitudinally. Shell colour brownish-pink to dull rose-pink, ribs tending to white, colour often faded with specimens appearing uniform white. Parietes of carinolatera very narrow, with single, conspicuous, longitudinal ridge. Scutum with occludent margin straight, surface indistinctly ridged, pit for adductor muscle scarcely visible. Tergum short, narrow, scutal margin straight, unusually distinctly dentated, carinal margin short, convex, depressor muscle crests moderately well developed. Opercular plates with long, golden setae fringing occludent margins, especially distally. Cirri I-II with rami slightly unequal, covered with setae; cirri III-VI longer, more slender, dense setae on inner face. Mandible with four teeth, second to fourth with accessory cusps, lower angle molariform with three blunt cusps in series, lower edge with row of stiff setae. Measurements of five specimens: basal length of shell 6.11-8.18 mm, orifice length 4.18-5.83 mm, carinal height 4.11-6.30 mm, orifice width 2.58-3.19 mm, basal width 4.51-5.96 mm. Distribution. Banda Sea (Moluccas, Indonesia); SW Australia; New Zealand; New Caledonia; Philippines to southern Japan; Malaysian waters; Gulf of Oman, Persia. 27-502 m depth (Jones and Hosie 2016). In this study, Solidobalanus auricoma was found at Kei Islands and Tanimbar Island, Indonesia. Diagnosis. Shell with smooth, glossy white plates, coloured stripes absent; internal plates thick, solid, finely ribbed longitudinally; base non-porous, radially ribbed. Description. Shell plates white, stripes absent. Several specimens with pale pink tinge, one with pale brownish-pink parietes with small, narrow ellipsoidal whitish spots, latter orientated longitudinally producing reticulated effect. Radii whitish, pink tinge along distal borders. Scutal growth lines without longitudinal striations; articular ridge absent; pit for adductor muscle small, round. Tergum with shallow, wide furrow running from apex to base. Cirrus I with unequal rami (anterior ramus: 7-segmented; posterior ramus: 15-segmented). Cirrus II with rami subequal (anterior ramus: 11-segmented; posterior ramus: 12-segmented). Cirri I and II with very dense, long setae on surface areas. Cirri III-VI with rami slightly subequal, rounded. Penis sturdy, not long.
Labrum with very shallow notch, three or four irregularly arranged, blunt teeth on each side. Maxillule with distinct, narrow notch with two large setae on upper side. Mandibles with five teeth, second and third bifid, fifth rudimentary. Measurements of five specimens: basal length of shell 6.23-12.08 mm, orifice length 4.63-9.62 mm, carinal height 3.48-13.22 mm, orifice width 3.24-6.33 mm, basal width 5.66-9.56 mm. Remarks. In the type description, Broch (1931-1932) commented that the specimens were white, without stripes. However, several of the specimens collected by KARUBAR had a pale pink tinge and one specimen (from station DW22) had pale brownish-pink parietes with small, narrow ellipsoidal whitish spots, the latter orientated longitudinally, thus producing a reticulated effect. Radii whitish with pink tinge along distal borders. Striatobalanus amaryllis. -Jones, 2004: 150. -Chan et al. 2009b. Description. Shell conical; tips of rostrum and carina slightly curved inwards. Orifice large, pentagonal, toothed. Colour yellowish-white, with slightly darker longitudinal lines on main parts of plates. Radii with very oblique summits, broadest a little distance from the orifice, narrower towards basis. Alae broader than radii, summits rounded. Specimen without scutum, tergum and soft parts. Measurements of specimen: basal length of shell 16.76 mm, orifice length 9.13 mm, carinal height 9.89 mm, orifice width 7.24 mm, basal width 15.14 mm. Subfamily AMPHIBALANINAE Pitombo, 2004 Amphibalaninae Pitombo, 2004: 263. Diagnosis. Shell with four or six plates; parietal tubes with one or more rows, commonly transverse septa; radii with transverse teeth on sutural edge with denticles on lower side only; alae not cleft; basis with single tubiferous; scutum with conspicuous adductor ridge; tergum with well-developed depressor muscle crests, growth lines in the tergum spur display an obvious change in direction; second maxilla with smooth anterior margin of distal lobe, acuminate setae with enlarged, modified tips. Type locality. Natal, on a piece of bamboo (Darwin 1854). Remarks. Known as an important fouling species of ships and marine installations. Traces of anti-fouling paint on the bases of the examined specimens suggest that they were probably knocked off a ship during trawling operations, explaining the great depth at which they were collected, as the normal depth range is 0-9 m. Discussion Prior to the Karubar expedition, 24 species of barnacles had been collected from the Kei Islands and Aru Island by the Siboga expedition (Hoek 1913). Other pertinent reference works on the barnacles from these islands are Jones et al. (2001) and Jones and Hosie (2016), who recorded 15 species from the Kei Islands and Aru Island. In addition to the works of Hoek (1913), Jones et al. (2001) and Jones and Hosie (2016), Broch (1931-1932) reported on 67 species of barnacles collected by the Danish expedition to the Kei Islands (1922) and deposited in the Zoological Museum of Copenhagen University. In his report, only four species, Euscalpellum rostratum (Darwin, 1851), Lepas (Anatifa) anatifera Linnaeus, 1758, Conchoderma virgatum Spengler, 1789 and Acasta dentifer (Broch, 1922), were explicitly collected in the Kei Islands. The other barnacle species recorded were collected at other places along the route of this expedition, such as Lampung Bay, Krakatau, Java Sea, Sunda Strait, Makassar Strait, Tual, Banda Neira, Ambon and Saparua Bay.
The lists of Hoek (1913), Broch (1931-1932), Jones et al. (2001) and Jones and Hosie (2016) record a total of 25 species from the Kei Islands, Aru Island and Tanimbar Island. The results recorded herein bring the total to 40 species now recorded from these islands. The present study and previous works on the barnacles of the Kei Islands, Aru Island and Tanimbar Island, especially the works of Hoek (1883, 1907, 1913), Broch (1922, 1931-1932), Buckeridge (1994, 1997), Jones et al. (2001) and Jones and Hosie (2016), enrich our knowledge of the barnacle fauna of these islands. This study demonstrates once more the value of museum collections as a resource in biodiversity science. The result of this study also strengthens the statement of Hoeksema (2007) that the Indo-Malayan region (which extends from East Indonesia to the Philippines and the Solomon Islands) is a centre of maximum marine biodiversity. Darwin (1854) demonstrated that this area had greater species richness than elsewhere in the world at the time. He named it the East Indian Archipelago (including the Philippines, Borneo, New Guinea, Sumatra, Java, Malacca and the eastern coast of India) and categorised it as his third province of barnacles. In this province, he found 37 barnacle species, the largest number known at that time, compared with the other provinces. Regarding barnacle biodiversity, the Indo-Malayan region has not been displaced by other areas as the centre of benthic biodiversity. In recent times, many studies and expeditions have been conducted in this area, revealing many more species of barnacles. For example, three expeditions were undertaken within Philippine waters from 1976 until 1985 through the MUSORSTOM cruises and the collections of the U.P. Marine Biological Laboratory at Puerto Galera, Oriental Mindoro (Rosell 1991; Chan 2009). Overall, the three MUSORSTOM scientific cruises collected 78 species of barnacles, 43 of which are new records and 12 of which are new to science (Rosell 1991). Through the Philippine Panglao expedition (2005), Chan (2009) also increased the number of barnacles known from the Philippines, reporting 20 barnacle species, two of them new to science. Similar to the Philippine waters, eastern Indonesian waters also have a high diversity of barnacles. Recently, it has been revealed that the Moluccan Islands in eastern Indonesia have 97 species of barnacles, 23 of which are new records and two of which still await formal description (Pitriana et al. 2020). Furthermore, this number will increase with the results of the present study of the barnacles from the Karubar expedition (1991), which has revealed 40 species of barnacles. The results of the studies of barnacles from the Philippines and eastern Indonesian waters reconfirm the Indo-Malayan region as the epicentre of marine biodiversity.
6,280.4
2020-09-28T00:00:00.000
[ "Biology", "Environmental Science" ]
Data for training and testing radiation detection algorithms in an urban environment The detection, identification, and localization of illicit nuclear materials in urban environments are of utmost importance for national security. Most often, the process of performing these operations consists of a team of trained individuals equipped with radiation detection devices that have built-in algorithms to alert the user to the presence of nuclear material and, if possible, to identify the type of nuclear material present. To encourage the development of new detection, radioisotope identification, and source localization algorithms, a dataset consisting of realistic Monte Carlo–simulated radiation detection data from a 2 in. × 4 in. × 16 in. NaI(Tl) scintillation detector moving through a simulated urban environment based on Knoxville, Tennessee, was developed and made public in the form of a Topcoder competition. The methodology used to create this dataset has been verified using experimental data collected at the Fort Indiantown Gap National Guard facility. Realistic signals from special nuclear material and industrial and medical sources are included in the data for developing and testing algorithms in a dynamic real-world background. Measurement(s): gamma ray photon detection events • radiation detection data. Technology Type(s): Monte Carlo particle transport model • computational modeling technique. Sample Characteristic - Environment: city. Sample Characteristic - Location: State of Tennessee. Machine-accessible metadata file describing the reported data: https://doi.org/10.6084/m9.figshare.12654065 Background & Summary The US government performs radiation detection, identification, and localization campaigns for a variety of scenarios, including emergency response, large public gatherings (e.g., concerts and sporting events), and political events (e.g., presidential inaugurations). These radiation detection campaigns are generally conducted by trained teams equipped with radiation detection systems that can be carried by hand, mounted on an automobile, or mounted on unmanned robotic systems. The NaI(Tl) scintillation detector is one of the most commonly used radiation detectors because of its high gamma ray photon detection efficiency and relatively low cost. The crystal used in these detectors can also be easily manufactured in a variety of shapes and sizes [1][2][3]. Two primary streams of information can be used to analyze the signal from a NaI(Tl) detector. The first is the gross count rate, which is simply the number of photons detected in the sensor divided by time. The second exploits the response of a NaI(Tl) detector, which is proportional to the energy deposited by each interacting gamma ray. From this information, a histogram of detected photon energies called the gamma ray spectrum can be created. Because gamma rays emitted by different radioisotopes exhibit characteristic, discrete energies, the gamma ray spectrum can potentially be used to identify the radioisotope(s) detected. In addition, techniques based on the gamma ray spectrum may be less sensitive to background fluctuations, enabling the detection of radionuclides in highly dynamic backgrounds.
To rapidly determine if an illicit source of radiation is present, these radiation detection systems are often equipped with automated radiation detection, radioisotope identification, and/or source localization algorithms that alert the operators to potential events that need further investigation. These algorithms can operate on the gross count rate, the gamma ray spectrum, or a combination of both data streams. One of the main challenges with the design of these algorithms is the highly dynamic nature of the background radiation environment. Naturally occurring radioactive material (NORM), which is primarily comprised of 40K, 238U, 232Th, and the radioactive daughter products of the latter two isotopes, is present in different natural and man-made materials at varying concentrations and relative isotopic ratios. This variation in both absolute and relative NORM concentration in different materials means that both the gross count rate and spectroscopic signal read by a detector when moving throughout a search area can change dramatically from one location to another [4][5][6][7]. This is especially true in urban environments, where the composition of buildings and their resulting radioactive signatures is varied (e.g., a granite building may be placed directly next to a concrete building) 8. Further, radiation interacts with different materials through a variety of physical mechanisms, exacerbating the dynamics of the measured radiation background signal. All of the aforementioned factors contributing to the dynamic background signal can cause radiation detection algorithms to produce false alarms, which take valuable time to investigate 9,10. Further, if the false positive rate of the algorithm is too high, the operators may become complacent with regard to these alarms, which reduces the probability of detecting a true alarm. In many cases, the false positive rate can be lowered by raising the detection threshold, which often comes at the cost of lowering the true positive rate. This is obviously not ideal, as not detecting a real illicit radioactive source could have catastrophic consequences. Overall, the ideal algorithm would have a very high true positive rate and a very low false positive rate. In the real world, however, a balance between true positive rate and false positive rate must be identified based on the specifics of the mission. To provide a dataset with high quality labels to develop and evaluate radiation detection, identification, and localization algorithms, Monte Carlo particle transport models were used to simulate the response of a 2 in. × 4 in. × 16 in. NaI(Tl) gamma ray detector moving through an urban environment. Figure 1 illustrates an example output from the model showing street geometry with corresponding mapping of gamma ray flux resulting from three different sources placed in different locations. The data are simulated from a simplified city street model: a street without parked cars, pedestrians, or other "clutter." The model includes a constant search vehicle speed with no stoplights, and no vehicles are present around the search vehicle. The search vehicle itself is not in the model. Instead, the detector is traveling down the street alone at a vertical height of 1 m above the ground. This simple model provides a starting point for comparing detection algorithms at their most basic level.
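The threshold trade-off described above can be made concrete with a simple gross-count alarm rule. The sketch below is not part of the published dataset or competition code; the background rate, integration window, and false-alarm target are hypothetical values chosen only to illustrate how raising the threshold suppresses false positives at the cost of sensitivity, assuming Poisson-distributed background counts.

```python
# Minimal sketch: gross-count alarm threshold from assumed Poisson background statistics.
# All numeric values below are illustrative placeholders, not dataset parameters.
from scipy.stats import poisson

background_rate_cps = 300.0   # hypothetical mean background count rate (counts/s)
window_s = 1.0                # integration window length (s)
target_far = 1e-4             # desired false-alarm probability per window

mu_bkg = background_rate_cps * window_s
# Smallest threshold T such that P(N >= T | background only) <= target_far
threshold = int(poisson.isf(target_far, mu_bkg)) + 1

# Probability of detecting a source that adds 60 counts in the same window
mu_src = mu_bkg + 60.0
tpr = poisson.sf(threshold - 1, mu_src)

print(f"alarm threshold: {threshold} counts per {window_s:.0f} s window")
print(f"false-alarm probability per window: {poisson.sf(threshold - 1, mu_bkg):.2e}")
print(f"detection probability for +60 counts: {tpr:.3f}")
```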
For a detailed overview on how the data were generated, see ref. 11. The parameters used to develop this model are controlled and known with absolute certainty, which is an extremely difficult condition to obtain in real-world experimental data. This level of ground truth is required to accurately assess the performance of radiation detection, identification, and localization algorithms, making this dataset a valuable asset to the algorithm development community. Further, for data-driven algorithms, such as neural networks and support vector machines in the machine learning field, high-quality labels are extremely important for training these algorithms to produce the desired results. Without mislabeled data and other outliers in the training set, algorithm developers can spend less time and effort developing mitigation strategies for these problems. This dataset was originally developed for the "Urban Nuclear Detection Challenge" data competition hosted on the Topcoder platform 12, which was held from March through May 2019. The data package contains the data used in the competition, along with a scoring algorithm that was used during the competition to judge the performance of algorithm result entries 13. Further descriptions of the data and scoring algorithm are presented in their respective sections. The competition results were based on the testing dataset, which was bifurcated into private scores (used for ranking and only given to the competitors at the end of the competition) and public scores (shown to everyone during the competition). This dataset has also been used to develop and test novel data-driven radiation detection algorithms, such as the autoencoder radiation anomaly detection (ARAD) algorithm, before testing them on real-world data 14. This dataset allowed the designers of the ARAD algorithm to evaluate new algorithms on well-controlled data and develop performance metrics, such as minimum detectable activity, receiver operator characteristic curves, and probability of detection curves 14. Methods This dataset was generated using the Monte Carlo particle transport models SCALE/MAVRIC 11,15, alongside a suite of custom automated data processing scripts written in Python. The physical models used in the Monte Carlo simulation consist of seven interchangeable city blocks, each containing a combination of buildings, sidewalks, streets, parking lots, and grassy fields. Each block is composed of some combination of asphalt, soil, concrete, granite, and brick. Each run in the dataset is a simulation of the detector response of a 2 in. × 4 in. × 16 in. NaI(Tl) gamma ray detector moving down a lane of traffic at a constant speed. To provide realistic background response variability in the data, the concentration of the NORM components in each material for each block was varied between different runs. In addition, instead of just one model, eight model instances were used, each with a different city block stacking configuration. To make it more difficult to simply "learn" the geometry of each model, detector starting and ending locations are also varied, and four lanes of travel are used (the detector can move in either direction through each model). Six threat sources are present in some of the datasets, both with and without 1 cm of lead shielding: 60Co, 99mTc, 131I, highly enriched uranium, weapons-grade plutonium, and 99mTc + highly enriched uranium. The gamma ray spectral templates for both the unshielded and shielded scenarios are shown in Fig. 2.
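As a purely illustrative sketch of how per-run background variability could be parameterized, the snippet below draws random NORM activity concentrations for each block material. The concentration ranges, units, and sampling scheme are assumptions made for illustration only and are not the values or procedure used in the published simulation workflow (ref. 11).

```python
# Illustrative only: randomizing NORM concentrations per run.
# Ranges below are hypothetical placeholders, not the values used in ref. 11.
import numpy as np

MATERIALS = ["asphalt", "soil", "concrete", "granite", "brick"]

def sample_norm_concentrations(rng):
    """Draw per-material K/U/Th activity concentrations (Bq/kg, hypothetical ranges)."""
    return {
        m: {
            "K40": rng.uniform(100.0, 1200.0),
            "U238": rng.uniform(10.0, 120.0),
            "Th232": rng.uniform(10.0, 120.0),
        }
        for m in MATERIALS
    }

rng = np.random.default_rng(42)
run_configs = [sample_norm_concentrations(rng) for _ in range(3)]  # e.g., three runs
print(run_configs[0]["granite"])
```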
The physical locations and activity of each source vary between runs. A total of 15 source locations were used, spread among all blocks, each with a different offset from the road and varying amounts of environmental shielding. More information on the development of the Monte Carlo model and how the detector response data were generated is discussed in detail in ref. 11. The end result of this effort is thousands of data files resembling typical radiological search data collected on urban streets in a midsized US city. Each data file simulates the pulse trains of gamma ray detection events (amount of energy deposited in the detector at a given time) received by a standard 2 in. × 4 in. × 16 in. NaI(Tl) detector driving along several city street blocks in a search vehicle at a constant speed between 1 and 13.4 m/s. The energy resolution for this detector is 7.5% at 661 keV. Data are divided into two categories: a training set and a test set. In the competition, the competitors were only provided answers to the training set and submitted their results online for the test set. Based on their performance, a numeric public score was generated based on a portion of the test set. A separate private score was generated on the other portion of the test set, and this score was used to rank the final submissions from each competitor. Data Records The data can be found in ref. 13 as a 13 GB gzip-compressed tarball (29 GB uncompressed). This work is licensed under CC BY 4.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0. To download the data, users must use either the Globus Connect Personal application to create a personal endpoint or a Globus endpoint server to download the data. Once downloaded, the tarball contains the following files:
• README.pdf - A PDF document with an overview of the dataset
• scorer - A directory containing code to score training and test answers
• sourceInfo - A directory containing threat source templates
• submittedAnswers.csv - A template solution.csv file for the test set
• testing - A directory containing all test datasets
• training - A directory containing all training datasets
• trainingAnswers.csv - A template solution.csv file for the training set
The training and testing directories each contain data files in comma-separated value format (.csv extension). Each file has two columns; the first column represents the time between photon detection events in the detector in units of microseconds, and the second column is the energy of the detected photon in units of kiloelectron volts. The scorer directory contains Python 3 code to generate a score based on the solution templates (solution_training.csv and solution_testing.csv in the scorer directory). This code uses training and test set answer keys (answerKey_testing.csv and answerKey_training.csv) to generate training and test scores. Technical Validation The accuracy of the modeling and detector response generation methodology has been studied using a radiation transport test bed of the Fort Indiantown Gap National Guard facility in Pennsylvania 16,17. Using measurements at this facility, modeled detector response functions have been compared with experimental data for both background only and source plus background simulations 18.
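Given the two-column list-mode format described in the Data Records section above (inter-event time in microseconds, deposited photon energy in keV), a single run can be loaded and converted to absolute event times with a few lines of pandas. The file name below is a placeholder, and the column names are assigned locally because the files themselves have no header row.

```python
# Sketch: load one list-mode run and recover absolute event times.
# "runID_12345.csv" is a placeholder file name, not an actual file from the tarball.
import pandas as pd

events = pd.read_csv(
    "runID_12345.csv",
    header=None,
    names=["dt_us", "energy_kev"],  # column names assigned here; files have no header
)

# Cumulative sum of inter-arrival times gives absolute time since the start of the run.
events["t_s"] = events["dt_us"].cumsum() * 1e-6

print(events.head())
print(f"run duration: {events['t_s'].iloc[-1]:.1f} s, {len(events)} detection events")
```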
Usage Notes Each of the comma-separated value (CSV) data files in the training and testing directories contains data in list-mode format, with the first column representing the time since the last detection event in units of microseconds and the second column representing the energy of the detected photon in units of keV. This data format gives the user a high level of flexibility that allows the algorithm developer to format the data into a variety of common radiation data formats. The gross count rate in the detector can be extracted from the list-mode data by summing the total number of detection events (counts) over a designated period of time. Likewise, gamma ray spectra can be generated by creating a histogram of the detected photon energies over a designated time and energy window. Another potential data format that may be useful to certain algorithms is the photon count rate in the detector for specific energy regions of interest. As stated in the Background & Summary section, gamma rays are emitted from specific radioisotopes and exhibit energy values characteristic of those isotopes. By summing the number of counts over time within a specific energy band, the count rate for only a select photon energy window can be obtained. Some algorithms, particularly those based on neural networks designed for image analysis, can observe the gamma ray spectrum as it evolves over time in the form of an image. One way to achieve this is to stack gamma ray spectra integrated over a period of time in the form of a 3D waterfall plot. As in most image analysis tasks, normalizing the images using min-max normalization would likely be beneficial. Code availability The position- and energy-dependent flux data were generated with MAVRIC, a serial-only code in the SCALE 6.2 package 19. Some custom codes were developed to make the mesh-based sources outside of MAVRIC, but these codes are not available. However, the methods used in these custom codes have been adopted by Shift, a new parallel Monte Carlo code, which will be released in the next version of SCALE. Shift will be able to use the same geometry and materials as MAVRIC and perform similar calculations.
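A minimal sketch of the data manipulations outlined in the Usage Notes above is given below, reusing the `events` DataFrame from the earlier loading example: a gross count-rate time series, a gamma ray spectrum, a count rate in one energy region of interest, and a time-versus-energy "waterfall" array with min-max normalization. The bin widths and the energy window are illustrative choices, not prescriptions from the dataset documentation.

```python
# Sketch: common reformatting of list-mode data (times in s, energies in keV).
# Assumes `events` from the loading example above; all bin choices are illustrative.
import numpy as np

t = events["t_s"].to_numpy()
e = events["energy_kev"].to_numpy()

# 1) Gross count rate: counts per 1-s interval
time_edges = np.arange(0.0, t.max() + 1.0, 1.0)
gross_counts, _ = np.histogram(t, bins=time_edges)           # counts per second

# 2) Gamma ray spectrum: histogram of photon energies over the whole run
energy_edges = np.linspace(0.0, 3000.0, 301)                  # 10 keV bins up to 3 MeV
spectrum, _ = np.histogram(e, bins=energy_edges)

# 3) Count rate in a hypothetical energy region of interest (1150-1350 keV)
roi = (e >= 1150.0) & (e <= 1350.0)
roi_counts, _ = np.histogram(t[roi], bins=time_edges)

# 4) Waterfall image: one spectrum per time bin, min-max normalized for image-style models
waterfall, _, _ = np.histogram2d(t, e, bins=[time_edges, energy_edges])
wmin, wmax = waterfall.min(), waterfall.max()
waterfall_norm = (waterfall - wmin) / (wmax - wmin + 1e-12)   # values in [0, 1]

print(gross_counts[:10], spectrum.sum(), roi_counts.sum(), waterfall_norm.shape)
```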
3,442.6
2020-10-05T00:00:00.000
[ "Environmental Science", "Engineering", "Physics" ]
The impact of CBP expression in estrogen receptor-positive breast cancer Background The development of new biomarkers with diagnostic, prognostic and therapeutic prominence will greatly enhance the management of breast cancer (BC). Several reports suggest the involvement of the histone acetyltransferases CREB-binding protein (CBP) and general control non-depressible 5 (GCN5) in tumor formation; however, their clinical significance in BC remains poorly understood. This study aims to investigate the value of CBP and GCN5 as markers and/or targets for BC prognosis and therapy. Expression of CBP, GCN5, estrogen receptor α (ERα), progesterone receptor (PR) and human epidermal growth factor receptor 2 (HER2) in BC was analyzed in cell lines by western blot and in patients' tissues by immunohistochemistry. The gene amplification data were also analyzed for CBP and GCN5 using the publicly available data from BC patients. Results Elevated expression of CBP and GCN5 was detected in BC patient tissues and cell lines compared with their normal counterparts. In particular, CBP was more expressed in luminal A and B subtypes. Experiments using chemical and biological inhibitors of CBP, ERα and HER2 showed a strong association between CBP and the expression of ERα and HER2. Moreover, analysis of the CREBBP (for CBP) and KAT2A (for GCN5) genes in a larger number of patients in publicly available databases showed amplification of both genes in BC patients. Amplification of the CREBBP gene was observed in luminal A, luminal B and triple-negative but not in HER2-overexpressing subtypes. Furthermore, patients with high CREBBP or KAT2A gene expression had better 5-year disease-free survival than the low gene expression group (p = 0.0018 and p < 0.00001, respectively). Conclusions We conclude that the persistent amplification and overexpression of CBP in ERα- and PR-positive BC highlights the significance of CBP as a new diagnostic marker and therapeutic target in hormone-positive BC. Supplementary Information The online version contains supplementary material available at 10.1186/s13148-021-01060-2. Background Breast cancer (BC) is the most common type of malignancy among females, accounting for approximately 2.1 million new cases and 0.6 million deaths reported in 2018 worldwide [1]. Management of BC depends largely on enhancing the outcome and survival of patients through early detection of the disease. The increased BC mortality during the past 25 years could be attributed to the high percentage of patients who are still diagnosed at advanced stages [2][3][4]. In addition, the cure rates of the currently available BC treatment modalities are highly dependent on the molecular subtype of the tumor and the stage at diagnosis, which, in some cases, do not result in satisfactory clinical outcomes [5]. Inherent and/or acquired resistance to the existing hormonal and non-hormonal BC therapeutics is the main reason for BC therapy failure [6,7]. The great advances in understanding the biology and pathogenesis of BC have led to the development of targeted BC therapeutics.
Such therapeutics target molecules such as the human epidermal growth factor receptor 2 (HER2), the phosphoinositide-3-kinase (PI3K), the vascular endothelial growth factor (VEGF), the epidermal growth factor receptor (EGFR), the programmed death-1 (PD-1), the poly (adenosine diphosphate-ribose) polymerase (PARP), or the cyclin-dependent kinases [8]. Despite this arena of BC therapeutics, resistance and disease relapse are still an issue in some cases. Thus, the search for new biomarkers with diagnostic, prognostic and therapeutic purposes is still needed to assist in the clinical management of BC patients [9]. Currently, BC diagnosis and treatment decisions are mainly based on the expression of hormone receptors such as ER and PR and the expression status of HER2. Epigenetic modifications in cancer cells are now recognized to play an essential role in carcinogenesis and in the response of cells to cancer therapy. The development of epigenetic markers can therefore largely improve the outcome of advanced BC [10]. Acetylation of histone and non-histone proteins is an important epigenetic factor that regulates diverse biological processes related to DNA replication, transcription, DNA repair, cell growth and death [11]. The addition of an acetyl group to lysine residues is catalyzed by histone acetyltransferases (HATs), while this addition is reversed by the function of histone deacetylases (HDACs). Modification of the acetylation profile of proteins in cancer cells through mutations, overexpression, or dysfunction of these two families of enzymes is well known to contribute to the pathological program of carcinogenesis [12]. Besides, the reversible and dynamic nature of histone acetylation provides a therapeutic window of opportunity [13]. Therefore, it is essential to study the role of these epigenetic regulators in the context of tumorigenesis to find a suitable epigenetic factor serving as a biomarker as well as a therapeutic target. General control non-depressible 5 (GCN5) and CREB-binding protein (CBP) are HATs that are reported to play a key role in various types of cancers [12]. The overexpression of GCN5 has been reported in lung, colon, liver and endometrial cancers, as well as in Burkitt's lymphoma and glioma [14][15][16][17][18][19]. Indeed, it was found to exert an oncogenic role through the acetylation of oncoproteins like c-MYC, AIB1 and the translocated E2A-PBX1 [16,20,21]. It also plays a fundamental role in mediating diverse malignant processes such as cell cycle perturbations, cell migration and DNA damage repair [15,22,23]. On the other hand, several reports mentioned the involvement of CBP in both tumor-suppression and oncogenesis pathways, which forms a paradox about the function of CBP in cancer [12,24,25]. The status of CBP in cancer was found to be diverse, linked to chromosomal translocation in acute myeloid leukemia, somatic mutations in ovarian cancer and overexpression in lung and colon cancers [26][27][28][29]. Currently, very few reports are available about the status of CBP and GCN5 in BC. A recent study suggested a role for CBP in the biology of triple-negative BC [30]. Both HATs were previously reported to regulate the estrogen receptor signaling pathway that is implicated in breast carcinogenesis [31]. In particular, CBP/p300 is a well-known coactivator that functions in stimulating the transcriptional activity of the ER to induce the expression of estrogen-response elements [32][33][34].
Also, CBP was previously investigated as a potential target in metastatic BC [35]. The aim of this study was to investigate the expression status of CBP and GCN5 in BC patients' tissues and BC cell lines compared to their normal counterparts. A further aim was to study the CBP and GCN5 expression profiles in different subtypes of BC and their association with the different clinicopathological parameters. The ultimate goal was to investigate the possibility of using CBP and/or GCN5 as markers and targets for BC prognosis and therapy. CBP and GCN5 expression in breast cell lines and their relationship with ERα and HER2 receptor expression Differential expression of CBP and GCN5 proteins in normal and malignant BC cells was investigated in an in-vitro model using a panel of nine BC cell lines with different ERα, PR and HER2 receptor status and two types of normal breast epithelial cells (Fig. 1a, b, Additional file 1: Fig. S1, Additional file 1: Table S1). Moreover, the relationship between the baseline level of CBP and GCN5 and the status of ERα, PR and HER2 receptor expression in BC cell lines was also tested. The baseline level of CBP was higher in eight out of nine BC cell lines compared to the normal epithelial breast cells (Fig. 1a). Interestingly, there is a negative correlation between expression of HER2 and CBP (r = − 0.6295, p = 0.0347) (Fig. 1c, Additional file 1: Fig. S1). This is indicated by the high baseline level of CBP expression in the seven cell lines lacking HER2 overexpression (MCF7, T47D, BT-549, MDA-MB-231, MDA-468, BT-20 and HS578T) as well as by the low baseline level expression of CBP in the two cell lines overexpressing HER2 (BT-474 and SkBr3) (Fig. 1a). On the other hand, the expression of ERα is positively associated with a high baseline level of CBP (r = 0.6957, p = 0.0187) (Fig. 1d, Additional file 1: Fig. S1). This association is clear in cells not overexpressing HER2 (e.g., MCF7 and T47D); however, in ERα-positive, PR-positive cells which also overexpress HER2, the effect of HER2 overexpression is predominant and the baseline level of CBP is low (e.g., BT-474 cells) (Fig. 1a, d). These results indicate a strong correlation between the receptor (ERα, PR and HER2) status of BC cells and the baseline level of CBP. It is noteworthy that the baseline level of CBP in most cell lines is time- (cell cycle phase-) dependent. To cancel this effect, we seeded the same number of cells and we collected cells for protein extraction at the same time point for all tested cell lines. For the baseline level of GCN5, there is a general trend towards higher expression in BC cells than in normal breast epithelial cells. Moreover, and contrary to CBP, there appears to be no correlation between the baseline level of GCN5 and the expression of ERα, PR, or HER2 receptors in BC cells (Fig. 1b). To investigate the nature of the crosstalk between CBP and the expression of ERα and HER2 receptors, we investigated the effect of the chemical and biological inhibition of ERα and HER2 on the expression of CBP in BC cells (Fig. 2). To cancel the effect of time-dependent expression of CBP, we used a separate control for each studied time point. Downregulation of HER2 by siRNA in HER2-overexpressing cell lines (SkBr3 and BT474) significantly increased the expression of CBP in both cell lines (Fig. 2a, b, Additional file 1: Fig. S2a, b). Particularly, the highest level of increase in CBP (more than 10-fold) was observed at 96 h of HER2 downregulation.
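The correlation coefficients reported above (r = 0.6957 for CBP versus ERα status, r = − 0.6295 for CBP versus HER2 status) can be reproduced with a standard Pearson (point-biserial) correlation. The sketch below shows the form of such an analysis; the densitometry values and binary receptor codes are hypothetical placeholders and do not reproduce the measured values behind Fig. 1c, d.

```python
# Hedged sketch of the correlation analysis between baseline CBP level and receptor status.
# Values below are hypothetical placeholders, not the measured data from this study.
from scipy.stats import pearsonr

# One entry per cell line, in an arbitrary illustrative order
cbp_level = [1.8, 1.6, 1.2, 1.1, 1.0, 0.9, 0.8, 0.4, 0.3]   # e.g., CBP/loading-control ratios
er_status = [1, 1, 0, 0, 0, 0, 0, 1, 0]                     # 1 = ERα-positive
her2_status = [0, 0, 0, 0, 0, 0, 0, 1, 1]                   # 1 = HER2-overexpressing

r_er, p_er = pearsonr(cbp_level, er_status)
r_her2, p_her2 = pearsonr(cbp_level, her2_status)
print(f"CBP vs ERα:  r = {r_er:.3f}, p = {p_er:.4f}")
print(f"CBP vs HER2: r = {r_her2:.3f}, p = {p_her2:.4f}")
```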
The same results were obtained upon chemical inhibition of HER2 by trastuzumab; however, the increase in the level of CBP was observed at earlier time points (24 and 48 h) (Fig. 2c, d). These results confirm the negative feedback between HER2 and CBP. Similarly, the biological and chemical inhibition of ERα resulted in overexpression of CBP (Fig. 3). Figure 4 shows that the downregulation of CBP resulted in a reduction in the level of ERα in both cell lines. Collectively, these results indicate a strong association between CBP and the expression of ERα and HER2, in which a clear direct relation exists between CBP and ERα. On the other hand, the relationship between CBP and HER2 expression is not as straightforward as that with ERα. Expression of CBP and GCN5 in clinical breast tissue samples The expression of CBP and GCN5 was investigated in human BC tissues and normal breast tissues using the immunohistochemical (IHC) approach as described in the consort diagram (Fig. 5). The characteristics of the included patients are summarized in Table 1. The majority of the included BC cases were diagnosed primarily as invasive breast carcinoma (IBC) of no special type (NST) with/without an associated in situ component. IBC NST constitutes the majority of the histological subtypes of BC [36]. Few cases were reported as ductal carcinoma in situ where no invasive component could be identified (Additional file 1: Table S2). Representative images of negative and positive expression of CBP and GCN5 are shown in Fig. 6a. The subcellular distribution of CBP protein showed its localization in the nuclei of normal and BC cells, and it was less frequently detected in the cytoplasm. On the other hand, GCN5 protein is distributed in both nuclei and cytoplasm (Fig. 6a) (Fig. 6d). Since we are interested in ductal carcinoma, we selected the ductal carcinoma cases only for further analysis. In total, 1863 samples were filtered and searched for alterations in the CREBBP and KAT2A genes in terms of mutation, amplification, or altered gene expression (Fig. 6e). The CREBBP gene was altered in 115 (6%) of 1863 queried samples, and all the alterations were of the amplification type (Fig. 6f). On the other hand, KAT2A was amplified in 29 (2%) of the queried patients (Fig. 6g). These results indicate that CBP and GCN5 are overexpressed in breast tumors and that their genes are amplified in some BC patients. The level of CBP expression correlates with ERα and PR protein expression in patient samples Next, we investigated the expression of CBP and GCN5 with respect to BC subtypes in our patients' samples and in the publicly available datasets (Fig. 7a). This supports the same finding in our cohort of patients' samples. However, the KAT2A gene was observed to be significantly amplified in luminal B and HER2-overexpressing subtypes (Fig. 7a). Investigating the expression level of CBP and GCN5 in BC tissue samples with different receptor status revealed a significantly high level of CBP expression in the ERα-positive and PR-positive BC compared to ERα- and PR-negative tissue samples (p = 0.0001 and p = 0.0001, respectively) (Table 3), whereas the expression level of GCN5 did not show a significant correlation with the status of ERα or PR hormone receptors (p = 0.213 and p = 0.541, respectively) (Table 3). Additionally, no significant correlation was found between the positive expression of either HAT (CBP or GCN5) and HER2 overexpression or Ki-67 status.
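The associations between CBP immunopositivity and hormone receptor status reported above are the kind of result typically obtained from a contingency-table test. The sketch below applies scipy's chi-square test to a hypothetical 2 × 2 table; the counts are placeholders and do not reproduce the data behind Table 3, nor is the specific test used in the study confirmed here.

```python
# Hedged sketch: association between CBP positivity and ERα status in tissue samples.
# The 2x2 counts are hypothetical placeholders, not the actual data behind Table 3.
import numpy as np
from scipy.stats import chi2_contingency

#                 CBP-positive  CBP-negative
table = np.array([[60,           15],    # ERα-positive cases (hypothetical)
                  [20,           30]])   # ERα-negative cases (hypothetical)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```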
In line with the results from tissue samples, data from publicly available BC databases showed a strong correlation between CBP amplification and ERα and PR status, whereby ERα-positive and PR-positive samples were associated with CREBBP amplification more than their corresponding ERα- and PR-negative samples (Fig. 7b, c). On the other hand, no correlation was observed between KAT2A gene amplification and ERα or PR status (Fig. 7b, c). The strong association between CBP and ERα was further confirmed from the publicly available BC datasets of ERα-positive BC patients who received hormonal therapy. Correlation analysis showed that a high percentage of BC patients have amplification of the CREBBP gene after receiving hormonal therapy (Fig. 7d). This supports the in-vitro results of a positive correlation between CBP and ERα. Relationship between CBP and GCN5 expression and clinicopathological features and survival of BC patients To check the clinical significance of CBP and GCN5 expression in BC patients, we investigated the clinical characteristics of the patients and the histopathological parameters of the tumors of our BC patient cohort, e.g., age at diagnosis, tumor type and its histological grade, lymph node status and the TNM stage of breast carcinomas, as well as the survival of the patients. However, no significant associations were found between CBP or GCN5 expression and any of the studied clinicopathological parameters (p > 0.05) (Additional file 1: Table S3). The influence of CBP and GCN5 expression on the overall survival (OS) or disease-free survival (DFS) of BC patients was investigated using a log-rank test (Fig. 8). The analysis showed no significant correlation between CBP or GCN5 expression and the 5-year DFS (p = 0.630 and 0.351 for CBP and GCN5, respectively) (Fig. 8a, b). Similarly, no clear impact was found for high CBP or high GCN5 expression on the OS and the DFS of BC patients (p = 0.601, 0.670 for OS and DFS, respectively) (Fig. 8c, d). Also, the level of expression of the two proteins (CBP and GCN5) did not correlate significantly with the DFS in the different BC types (Additional file 1: Fig. S11, Additional file 1: Fig. S12). However, the analysis of the publicly available data of CREBBP and KAT2A gene expression in a larger number of patients showed a significant correlation with regard to the 5-year DFS (Fig. 8e-g). The Kaplan-Meier Plotter online tool (http://kmplot.com/) was used to examine survival in BC patients. Patients were divided into two groups (low and high expression) according to the mRNA expression of the given genes. Patients with high CREBBP gene expression had better 5-year DFS rates than the low gene expression group (p = 0.0018) (Fig. 8e). Similarly, high KAT2A gene expression correlated significantly with better 5-year DFS of BC patients (p < 0.00001) (Fig. 8f). Analyzing the OS showed significant differences between groups of patients with amplified CREBBP, amplified KAT2A, or no amplification of either gene (p = 0.025), whereby CREBBP amplification is associated with better OS and KAT2A gene amplification with worse OS (Fig. 8g). Discussion Although great advances have been made in the management of early-stage BC, a considerable fraction of patients might progress into metastatic BC [38].
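The survival comparisons above (Kaplan-Meier curves with log-rank tests, stratified by high versus low expression) follow a standard workflow; a minimal sketch using the lifelines package is given below. The follow-up times, event indicators, and group definitions are synthetic placeholders, not patient data from this cohort or from the public databases.

```python
# Hedged sketch: 5-year DFS comparison between high and low expression groups.
# Synthetic survival data only; requires the `lifelines` package.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
t_high = rng.exponential(70, 80).clip(max=60)   # follow-up in months, censored at 60
t_low = rng.exponential(40, 80).clip(max=60)
e_high = (t_high < 60).astype(int)              # 1 = relapse observed, 0 = censored
e_low = (t_low < 60).astype(int)

kmf = KaplanMeierFitter()
kmf.fit(t_high, event_observed=e_high, label="high expression")
print(kmf.survival_function_.tail(1))           # estimated DFS at end of follow-up

result = logrank_test(t_high, t_low, event_observed_A=e_high, event_observed_B=e_low)
print(f"log-rank p-value: {result.p_value:.4f}")
```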
Despite the different therapeutic options available for the treatment of metastatic BC (such as endocrine therapy, tyrosine kinase inhibitors, growth factor antagonists, PARP inhibitors and conventional chemotherapy), advanced metastatic BC is considered incurable [39]. Histone acetyltransferases (HATs) regulate many cellular processes by modifying the acetylation status of histone and non-histone proteins and by acting as transcriptional co-activators [40]. Thus, reports of aberrant activity and/or expression of different HATs in many diseases, including cancer, are not surprising. The main aim of this study was to investigate the role of two HATs, namely CBP and GCN5, as diagnostic or prognostic markers in BC. Moreover, we aimed to study the relationship between these two HATs and the expression of ERα, PR and HER2 receptors in BC. This might help in better understanding the pathogenesis of BC and hence in developing new diagnostic and prognostic markers and new therapeutic targets. The loss of CBP was reported previously to be associated with the initiation of basal-type BC, which is known to be aggressive, resistant to anti-cancer drugs and associated with a high mortality rate. This was attributed to the inability of breast cells to execute apoptosis upon loss of CBP [41]. In another study, CBP was found to be highly expressed in triple-negative BC patients, an aggressive BC subtype, and its overexpression correlated with lymph node metastasis but not with overall survival [42]. The overexpression of the CBP paralog, p300, in breast carcinoma was previously reported, and it was evaluated as an independent biomarker for poor prognosis of BC patients [43]. Our results revealed high expression and/or amplification of CBP and GCN5 in BC compared to benign neoplasia samples and normal breast samples. Importantly, high CBP expression or amplification was correlated with the positive status of ERα and PR receptors and was more pronounced in luminal A and luminal B subtypes. This may reflect the value of CBP and GCN5 as diagnostic markers in BC. The higher degree of protein expression in carcinomas in the TMAs of our cohort compared to the relatively low number of genetic alterations in publicly available datasets may be explained by the fact that, in some cases, non-detectable levels of gene expression had no effect on the levels of the detected protein expression, suggesting fast translation in the case of a short half-life or efficient translation from a small amount of mRNA in the case of a low level of mRNA. This is reported in the literature [44]. Moreover, the discrepancy between our results and the publicly available data regarding the correlation between the level of the two proteins and the patients' DFS or OS may be due to the fact that the survival analyses were performed at different levels of gene expression (i.e., mRNA and protein), and each level could be differentially regulated, which might result in this variation. In addition, although datasets provide a valuable resource to test hypotheses for individual genes/signatures, there are variations in terms of size, patient characteristics and molecular composition of these datasets, and they do not necessarily reflect the studied cohort of BC patients. This is reported in the literature [45]. This indicates the need for more studies to test the value of these two proteins as prognostic markers in BC. Previously, p300, a paralog of CBP, was reported as a poor prognostic marker in BC [43].
Although CBP and p300 have overlapping functions, pieces of evidences exist for unique roles and pathways [46]. The high level of CBP and GCN5 in BC cells/tissues might be a cause or a consequence of the malignant transformation. Their high level may enhance malignant transformation by increasing the activity of the growth-promoting genes (oncogenes) through enhanced acetylation of their promoter areas or through stimulating their activity by acting as transcriptional coactivators. This concept contradicts with the report of Dietze et al. who reported that loss of CBP in human mammary epithelial cells is associated with the inability of cells to execute apoptosis and increases the risk of basal-type BC [47]. On the other hand, a malignant transformation may increase the expression of CBP and GCN5 to enhance the expression of genes involved in processes such as angiogenesis, DNA repair, invasion and migration aiming to support the high level of division of malignant cells or to help them to accommodate for cellular stress. This point needs more deep investigations to understand the role of CBP and GCN5 in breast carcinogenesis and whether they are involved in the early stages of carcinogenesis or they are needed for the late events of building up a malignant tumor mass. We also report for the first time the existence of a reciprocal relationship between CBP and ERα and CBP and HER2. CBP is an established transcriptional coactivator of ERα; therefore, downregulation/chemical inhibition of ERα reduces the consumption of CBP and increases its free level. On the other hand, the downregulation of CBP might reduce the level of ERα by one or two ways; (1) reduction of the acetylation of the ERα gene promoter area and the subsequent reduction of ERα mRNA transcription, or (2) reduced level of CBP which is a transcriptional coactivator of ERα results in a reduction of the transcriptional activating activity of ERα and less ability to bind DNA with subsequent enhanced ERα degradation. Another hypothesis is that the CBP is controlling the expression of ERα (i.e., CBP acts upstream of ERα); therefore, when the level of CBP is low, the level of ERα will be low (as in Fig. 1d) and when ERα is inhibited (biologically or pharmacologically), the level of CBP will be increased to compensate (which is shown in Fig. 3). This hypothesis is confirmed in Fig. 4, whereby downregulation of CBP resulted in downregulating the expression of ERα. A similar relationship between ERα and CBP was reported previously whereby ligand-activated ERα induced reduction of the histone acetyltransferase activity of CBP [48]. Also, CBP is involved in estrogen receptor signaling through inducing its acetylation and enhancing its transcriptional-and DNA binding activities [49]. In addition, the public data analysis in our current study showed enhanced CREBBP gene amplification in tumor specimens from BC patients who received hormonal therapy. These clinical analyses support our in-vitro findings for the crosstalk between CBP and ERα. On the other hand, ER positivity was massively reported to be associated with better prognosis and survival outcomes in BC patients [50][51][52][53]. The positive correlation between CBP and ERα in the BC patients as indicated in this study proposes that the prognostic significance of CBP in BC could be similar to ERα and introducing CBP as a favorable prognostic biomarker. 
The increased expression of CBP upon HER2 downregulation by siRNA or inhibition by Trastuzumab suggests a negative effect of HER2 on CBP expression. This may be due to: (1) CBP is involved in HER2 signaling and inhibition of HER2 conserves the CBP and hence increases its level, (2) CBP is involved in HER2 expression by acetylating the promoter area of its gene and inhibition of HER2 results in an increased level of CBP to compensate, or (3) HER2 inhibition induces cellular stress and/or DNA damage and CBP level is enhanced as a response to this cellular stress. In this context, a previous investigation showed that the RAS-PI3K-AKT, a downstream pathway of HER2, targets CBP via the MDM2-dependent degradation [54]. Since ERα and HER2 signaling pathways are critical for BC progression and therapy, our report of crosstalk between CBP, ERα and HER2 emphasizes the role of CBP in BC. However, more work would be needed to understand the functional interaction between CBP, ERα and HER2 in BC. Conclusions In conclusion, we report the overexpression of CBP and GCN5 in BC cells/tissues more than the normal ones. The relationship between CBP and GCN5 expression and patients DFS or OS requires more investigations. Interestingly, a bidirectional crosstalk exists between CBP, ERα and HER2, which suggests the contribution of CBP Cell lines The BC cell lines were purchased from ATCC (VA, USA) (Additional file 1: Table S1) Protein extraction and western blot Analysis of the protein expression was performed as described previously [55]. Briefly, total cell lysates from breast cell lines were prepared in lysis buffer (20% SDS, glycerol, 1 M Tris (pH 6.8)) containing protease inhibitor cocktail (Sigma-Aldrich, USA). An equal amount of proteins (10 µg) were loaded and separated in 8% Table 1 and Additional file 1: Table S2. Tissue microarray construction Tissue microarray was constructed as described previously [56]. Tissue cores of 1.5 mm diameter were punched from selected regions of FFPE donor tissue blocks and embedded into recipient paraffin block using semi-automated arrayer (TMArrayer; Pathology Devices, MD, USA). Immunohistochemistry staining The immunohistochemistry for CBP and GCN5 was performed manually. Deparaffinization of the unstained sections was carried out by xylene followed by rehydration in a series of ethanol. Subsequently, the antigen retrieval was carried out using citrate buffer and heated for 5 min at 900 W followed by heating for 10 min at 750 W for two times. After cooling down, the slides were washed three times with PBS and the endogenous peroxidase was blocked for 10 min by 3% hydrogen peroxide. The slides were subsequently washed three times with PBS and blocked with goat serum for 45 min, followed by incubation with CBP (# sc-7300) or GCN5 (# sc-365321) primary antibody at dilution 1:100 (Santa Cruz Biotechnology, USA) at 4 °C overnight. On the next day, the slides were washed with PBS and incubated with rabbit secondary antibody labeled with biotin at dilution 1:50 for 30 min (Dako, USA) followed by the addition of diaminobenzidine substrate (Dako, USA) in combination with avidin-peroxidase complex solution. Finally, the slides were counterstained with hematoxylin, covered with aquatex and scanned by a digital microscope (Pannoramic DESK, 3D Histech, Budapest, Hungary). Immunohistochemistry interpretation Immunopositivity of CBP and GCN5 was assessed semiquantitatively by two independent observers to confirm the reproducibility of the results. 
The whole TMA cores in each tumor and non-tumor breast tissue were evaluated. The percentage of positively stained tumor cells (PP) and the staining intensity (SI) were determined. The immunoreactive score (IRS) was as follows: IRS = SI × PP, for each sample, as previously described [57]. The intensity was scored as follows: 0: No staining, 1: weakly positive, 2: moderately positive and 3: strongly positive. The percentage of positively stained cells was given the following scores: score 0: 0-1% positive cells, score 1: 2-20% positive cells, score 2: 21-50% positive cells and score 3: 51-100% positive cells. The IRS score thus ranged from 0 to 9, designated as negative for a score of 0 to 3, weakly positive for a score of 4 or 5, moderately positive for a score of 6 or 7 and strongly positive for a score of 8 or 9. Localization of the positivity was also determined: nuclear, cytoplasmic, or mixed. Publicly available cancer genomics and patients' data In order to explore the expression and the clinical significance of CREBBP (CBP) and KAT2A (GCN5) genes in BC patients, the publicly available database (https:// www. cbiop ortal. org/) was used to extract the clinical, pathological and omics data for each patient in the dataset. BC dataset (METABRIC, Nature 2012 and Nat Community 2016) was used, it includes 2509 BC patients [37]. Invasive breast carcinoma cases were selected for further analysis. In order to evaluate the prognostic value of CREBBP and KAT2A mRNA expression, Kaplan-Meier Plotter online tool (http:// kmplot. com/) was used to investigate the OS in BC patients. Patients were divided into two groups (low and high expression) according to the mRNA expression of the given genes. Statistical analysis GraphPad Prism 6 (GraphPad Software, USA) and SPSS statistics (IBM corporation, USA) were used for statistical analysis. For in-vitro experiments, the results are expressed as the means ± SEM of at least three independent experiments and the unpaired student t-test was used for statistical analysis. Association between CBP and GCN5 expression and clinical characteristics of the patients and histopathological parameters of the tumors were examined using the Pearson chi-square test. Kaplan-Meier analysis was used to generate survival curves. Log-rank tests were used to assess the differences between groups in overall survival (OS) and disease-free survival. p value < 0.05 was considered as statistically significant. Additional file 1. Supplementary Tables: Table S1. Molecular subtypes of breast cancer cell lines. Table S2. Histopathological features of tissue microarray samples. Table S3. Clinical-pathological parameters and CBP & GCN5 expression in DCIS and breast carcinoma cases. Supplementary Figures: Fig. S1. Baseline expression level of ER and HER2 in a panel of normal cancer breast cells. Fig. S2. Efficiency of transfection kinetics for a, b HER2 siRNA, c-e ER siRNA and f, g both ER and HER2 siRNAs. Fig. S3. Efficiency of transfection kinetics for a, b CBP siRNA in MCF7 and T47D cells. Fig. S4. Uncropped blots for a CBP and b GCN5 proteins in normal and cancer breast cells. Fig. S5. Uncropped blots for HER2 and CBP proteins in a SkBr3 and b BT-474 cells transfected with HER2 siRNA for 24-96 hours.
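As an illustration, the immunoreactive scoring rule described above can be written as a short routine. This is a minimal sketch; the function name, argument names and the example values are illustrative and not part of the study.

```python
def irs_score(intensity: int, percent_positive: float) -> tuple[int, str]:
    """Immunoreactive score (IRS) = staining intensity (SI, 0-3) x proportion
    score (PP, 0-3) derived from the percentage of positively stained cells,
    following the scoring rule described in the text."""
    if not 0 <= intensity <= 3:
        raise ValueError("staining intensity must be 0-3")
    # Proportion score from the percentage of positively stained cells
    if percent_positive <= 1:
        pp = 0
    elif percent_positive <= 20:
        pp = 1
    elif percent_positive <= 50:
        pp = 2
    else:
        pp = 3
    irs = intensity * pp  # ranges from 0 to 9
    if irs <= 3:
        category = "negative"
    elif irs <= 5:
        category = "weakly positive"
    elif irs <= 7:
        category = "moderately positive"
    else:
        category = "strongly positive"
    return irs, category

# Example: moderate intensity (2) in 60% of tumor cells -> IRS 6, "moderately positive"
print(irs_score(2, 60))
```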
6,784.4
2021-04-07T00:00:00.000
[ "Biology", "Medicine" ]
Ex Vivo Exposure to Soft Biological Tissues by the 2- µ m All-Fiber Ultrafast Holmium Laser System : We present the results of ex vivo exposure by an ultrafast all-fiber Holmium laser system to porcine longissimus muscle tissues. A simple Ho-doped laser system generated ultrashort pulsed radiation with less than 1 ps pulse width and a repetition rate of 20 MHz at a central wavelength of 2.06 µ m. Single-spot ex vivo experiments were performed at an average power of 0.3 W and different exposure times of 5, 30 and 60 s, varying the total applied energy in the range of 1.5–18 J. Evaluation of laser radiation exposure was performed according to the depth and diameter of coagulation zones, ablation craters and thermal damage zones during the morphological study. Exposure by ultrashort pulsed radiation with an average power of 0.3 W showed destructive changes in the muscle tissue after 5 s and nucleation of an ablative crater. The maximum ablation efficiency was about 28% at the ablation depth and diameter of 180 µ m and 500 µ m, respectively. The continuous-wave radiation impact at the same parameters resulted only in heating of the near-muscular tissue, without ablation and coagulation traces. Exposure to tissue with an average power at 0.3 W of ultrashort pulsed radiation led, within 30 and 60 s, to similar results as caused by 0.5 W of continuous-wave radiation, although with less carbonization formation. CW radiation exposure, the ablation crater diameter was about 300 µ m, and dependence on exposure time was not observed. The depth of the coagulation zone, created by USP radiation with a power of 0.3 W for 60 s (18 J) and CW radiation with a power of 0.5 W for 60 s (30 J), was approximately the same. The diameter of the coagulation zone for such combinations of parameters differed slightly. Introduction In the last decade, there has been particular interest in fiber sources of the short-wave infrared spectral range (SWIR, 2-3.5 µm) due to a wide area of potential applications, including laser surgery and biomedicine [1]. Such lasers can be based on silicate, fluoride, chalcogenide, or tellurite fibers doped with rare-earth ions [2,3]. Fibers based on silicate glasses are the most successful and widely used, allowing the attainment of compact, all-fiber laser schemes with simple standard splices. At present, silica-based fibers doped with thulium (Tm 3+ ) or holmium (Ho 3+ ) ions have been extensively developed and used for various types of fiber laser systems in the 2-µm spectral range [4][5][6][7]. However, there is a limitation of the lasing wavelength in silicate glass fiber at about 2.2 µm, because the propagation losses at longer wavelengths are extremely high due to the value of the phonon energy (1100 cm −1 ) [8]. In [9], Holmen et al. have realized a Ho-doped fiber laser with a tunable lasing wavelength from 2.025 to 2.2 µm and with a maximum slope efficiency of 58% and 27%, respectively. These fibers, together with another glass matrix or crystalline one, are used to operate at longer lasing wavelengths [10]. The presence of strong water absorption makes 2-µm laser sources promising for medical applications [11], because a decrease in the depth of radiation penetration into water-saturated tissue would be expected, leading to their precision processing with less tissue heating. Some medical procedures, such as minimally invasive surgery, laser enucleation, skin treatment, etc. have the potential to be more accurate and reliable using SWIR sources [12,13]. 
Considering that one of the water absorption peaks is located at a wavelength of about 1.94 µm, the use of a holmium laser with its emission wavelength shifted into the infrared range allows one to vary the penetration depth of laser radiation [14,15]. It is worth noting that 2-µm laser radiation is eye-safe, which is also an important advantage for a number of applications [16]. The field of laser application in medicine is wide and is constantly expanding, from diagnostics and therapy to surgery, while having non-destructive and destructive effects on biological tissues [17,18]. Therefore, laser radiation parameters should be chosen based on the desired result in each specific case. It should be taken into account that the effect of laser radiation depends on the wavelength, power density, operation mode (continuouswave orpulsed), duration of exposure and pulse repetition rate, as well as on the optical properties of biological tissues [19,20]. Frequently, medical laser systems of the 2 µm spectral range operate in continuouswave (CW) and nanosecond or longer pulsed modes [21][22][23]. These laser systems are mainly based on linear absorption of laser radiation by tissues, due to the relatively low peak intensity of the radiation, which is insufficient to induce a noticeable nonlinear interaction at initial average powers. In this regard, photodamage resulting from exposure to such lasers is strongly wavelength-dependent and thermal in nature [24]. On one hand, this can have a selective effect on the tissue, but on the other hand, it can lead to a non-deterministic cutting effect in heterogeneous tissue and limit the efficiency in transparent or weakly absorbing tissues [25]. Moreover, in this case, the heat diffusion from the laser spot increases the collateral thermal damage to surrounding tissues, contributing to scarring. This fact limits the precision of laser radiation effects inside thick specimens. The development of ultrafast pulsed laser sources and amplifiers producing short picosecond or femtosecond pulses [26,27] has made it possible to improve surgical precision beyond the optical diffraction limit through new, mostly non-thermal, modes of tissue photodamage. Such a precise effect is achieved due to the nonlinear absorption that occurs when the incident power intensity is sufficient to induce simultaneous absorption of multiple photons [28]. Thus, the volume into which the laser energy is deposited decreases and, consequently, there is less damage to the surrounding tissues [29]. The first studies on the medical application of femtosecond laser pulses dealt with effects on the retina and skin [30,31]. In [32], Oraevsky et al. showed that ablation and the plasma optical breakdown threshold strongly depends on the tissue linear absorption at the laser focus for the nanosecond pulses, while for femtosecond pulses it is practically independent of the target tissue linear absorption. This, in turn, simplifies targeting of small individual cellular structures [33,34]. Thus, ultrashort pulse (USP) (10 fs-100 ps) sources allow more precise non-thermal exposure, in contrast to continuous-wave radiation [35,36]. Moreover, in [37] Amini-Nik et al. showed that the damage is noticeably less and the healing process without scarring is much faster after exposure to a picosecond infrared laser, in contrast to the conventional surgical laser or a scalpel. 
Thus, the use of this type of radiation can lead to a more comfortable environment for patients by reducing healing time and reducing the risk of infections during surgery. Significant progress has been made since the first use of lasers for medical purposes as an alternative to mechanical surgical instruments [38]. The design of laser systems has been greatly improved and changed towards compactness and simplicity. Due to the numerous advantages, particular attention is paid to the medical application of fiber lasers performing precise impact with minimal collateral damage to surrounding tissues [39,40]. Additionally, the use of fiber lasers simplifies the task of delivering laser radiation during minimally invasive operations using surgical endoscopes. In this paper, we present the results of ex vivo exposure on soft tissues while using an ultrafast all-fiber Ho-doped laser system and compare them with the results obtained earlier in [41] with a CW Ho-doped fiber laser under the same conditions and applied energy. A simple, unique Ho-doped laser system generated ultrashort pulsed radiation with less than 1 ps pulse width, with a repetition rate of 20 MHz at a central wavelength of 2.06 µm. The main objectives were to evaluate the depth and diameter of ablation (AZ), coagulation (CZ) zones and the heat-affected zone (HAZ) after ex vivo experiments of single-spot ablation of porcine longissimus muscle tissue by 2-µm ultrashort pulsed radiation with an average power of 0.3 W in the applied energy range of 1.5-18 J. Furthermore, we aimed to quantify the ablation efficiency (AE) for the Ho-doped fiber laser used in the study. Ex Vivo Tissue Preparation and Histological Procedures Porcine longissimus muscle tissue was used as an ex vivo tissue model. The choice of the biological tissue type is due to the sufficient study of its spectral properties. Based on the results of [10], we can predict the interaction character of 2-µm laser radiation with porcine muscle tissue. After preliminary cooling to 4 • C, porcine muscle tissue was cut into small specimens up to 5 mm thick. The specimens' temperature was restored to room temperature (22 • C) before laser exposure. The tissue surface was sprayed with a physiological solution to prevent its drying and dehydration during the experiment. Histological sections made with a microtome Microm HM 540 (Thermo Fisher Scientific Microm International Gmbh, Walldorf, Germany) perpendicular to the exposed specimen surface were analyzed using a confocal laser scanning microscope (Zeiss LSM 710 NLO, Zeiss, Jena, Germany) with a tunable wavelength (0.8-1.5 µm). A slice with 10 µm thickness was stained with a mixture of annexin 5/acridine orange-propidium iodide. The intensity of damaged tissue staining allowed us to identify coagulation zones, and qualitatively and quantitatively evaluate deep changes in tissue caused by laser action. The approximate size of the heat-affected zones was visually estimated immediately after laser exposure using the optical microscope MBS 12 with a magnification of 20×. Ultrafast Ho-Doped Fiber Laser System and Experimental Setup The unique all-fiber Holmium master oscillator power amplifier (MOPA) system was used as a source of ultrashort pulses. This system consisted of a hybrid mode-locked Ho-doped fiber laser and Ho-doped fiber amplifier, pumped by Yb-doped fiber laser [42]. The experimental setup of the Holmium MOPA system is presented in Figure 1a. 
The laser with ring cavity consisted of Ho-doped fiber (4 m long) and SMF-28 single-mode fiber (6 m long). The active fiber core diameter was 16 µm and the cut-off wavelength was about 2 µm. The numerical aperture of the Ho-doped fiber was 0.11. The holmium ion concentration in the fiber core was 5 × 10¹⁹ cm⁻³. The net cavity dispersion of the laser was estimated to be about −1.1 ps². The laser was pumped through a 1.125/2.1 µm wavelength division multiplexer by a CW Yb-doped fiber laser emitting at 1.125 µm, with a maximum output power of up to 8 W. The absorption coefficient of the Ho-doped fiber at the pump wavelength of 1.125 µm was about 5 dB/m. To select one direction of radiation propagation, we used a fiber isolator specialized for 2 µm. The light was out-coupled from the laser cavity with a 10/90 fiber coupler, which allowed 90% of the optical power to be out-coupled. Hybrid mode-locking was realized by combining a fast and a slow saturable absorber in the laser cavity, namely the nonlinear polarization evolution effect and single-walled carbon nanotubes (SWCNTs). For this, a fiber polarizer and a pair of polarization controllers were placed in the laser cavity, and SWCNTs were fixed between two angled polished fiber connectors (FC/APC), which were placed after the isolator in order to reduce the power density. This laser produced 1 ps soliton pulsed radiation with a repetition rate of about 20 MHz at a central wavelength of 2.068 µm and an average output power of 4.5 mW at a pump power of 3.2 W. The output lasing spectrum is presented in the inset of Figure 1a. The Ho-doped fiber amplifier was used to increase the laser average output power up to the maximum value of 0.5 W. Pulsed radiation of the laser was directed to the Ho-doped fiber amplifier through an isolator, which was used to suppress undesirable feedback. The counter-propagating pumping of the amplifier was carried out through a multiplexer by the CW radiation of a Yb-doped fiber laser at a wavelength of 1.125 µm. The pump power varied from 0 to 5 W. We used 2 m of Ho-doped fiber with a holmium ion concentration of 6.5 × 10¹⁹ cm⁻³ as the active medium of the amplifier. The absorption coefficient at the pump wavelength (λ = 1.125 µm) was about 11 dB/m and the numerical aperture was 0.14. Figure 1b,c shows the lasing spectrum and autocorrelation trace at the output of the Ho-doped fiber amplifier corresponding to an average output power of 0.4 W, which corresponded to 0.3 W of power delivered to tissue. The pulse energy in this case was 15 nJ. Such a noticeable transformation of the shape of the lasing spectrum and pulse autocorrelation trace is due to the joint effect of SMF and Ho-doped fiber dispersion and nonlinearity on the USP radiation propagation. Increasing the pump power leads to the generation of higher-order solitons and then to pulse decay into Raman solitons [43]. Therefore, in Figure 1c, the autocorrelation trace contains three well-defined peaks, corresponding to decaying pulses. It is possible to estimate the width of the pulse central part as 250 fs from the intensity autocorrelation function. 
Additionally, we carried out a comparison of pulsed and continuous-wave 2-µm radiation effects on the same soft biological tissues. Ex vivo investigation of CW Ho-doped fiber laser radiation exposure in a wide range of applied energy (1.5-66 J) at a wavelength of 2.1 µm on longissimus porcine muscle tissues was presented in our previous work [41], which also contains a detailed description of the CW Ho-doped fiber laser scheme, as well as the setup used in the experiment. A flexible single-mode fiber cable with a polished angle connector (FC/APC) was used for the convenient delivery of radiation in both laser systems. During experiments, we controlled the bending radius of the transport fiber to prevent undesirable losses. The experimental setup for delivering laser radiation to the specimens under study was a vertical stand with a focusing system attached to it (Figure 2). Thus, the output radiation was focused through an optical objective with 8 × 0.2 NA (LOMO) and positioned perpendicularly to the surface of the biological tissue fixed on the moving platform for the focal distance adjustment. The laser spot diameter on the specimen surface was about 40 µm. 
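As a quick sanity check on the exposure parameters used below (0.3 W of delivered power; exposures of 5, 30 and 60 s; a 20 MHz repetition rate; a central pulse width of about 250 fs; a 40 µm spot), the applied energies and some back-of-envelope per-pulse figures can be estimated as follows. This is an illustrative sketch under the stated assumptions (uniform circular spot, nominal pulse width); the fluence and peak-intensity values are estimates, not figures reported by the authors.

```python
import math

# Parameters quoted in the text (treated as exact for this estimate)
avg_power_w = 0.3          # average power delivered to the tissue, W
rep_rate_hz = 20e6         # pulse repetition rate, Hz
pulse_width_s = 250e-15    # estimated width of the pulse central part, s
spot_diameter_m = 40e-6    # laser spot diameter on the specimen surface, m

spot_area_cm2 = math.pi * (spot_diameter_m / 2) ** 2 * 1e4  # spot area in cm^2

# Applied energy = average power x exposure time
for exposure_s in (5, 30, 60):
    print(f"{exposure_s:>2} s exposure -> applied energy {avg_power_w * exposure_s:.1f} J")

# Illustrative per-pulse estimates (assumptions, not reported values)
pulse_energy_j = avg_power_w / rep_rate_hz        # ~15 nJ per pulse
peak_power_w = pulse_energy_j / pulse_width_s     # rough peak power
print(f"pulse energy ~{pulse_energy_j * 1e9:.0f} nJ, "
      f"peak power ~{peak_power_w / 1e3:.0f} kW, "
      f"fluence ~{pulse_energy_j / spot_area_cm2 * 1e3:.1f} mJ/cm^2, "
      f"peak intensity ~{peak_power_w / spot_area_cm2:.1e} W/cm^2")
```

The per-pulse energy recovered this way (about 15 nJ at 0.3 W and 20 MHz) matches the value quoted for the amplifier output above.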
The duration of laser radiation exposure on the tissues was chosen as 5, 30 and 60 s, as in previous work, to make the results comparable. Considering the losses in the optical system for the USP laser, the value of power delivered to the tissue was 0.3 W. Thus, we ensured that the applied energy delivered to the specimens was the same for both CW and ultrafast laser systems, in order to accurately compare the results. In the case of the ultrashort pulsed laser, the range of applied energy varied from 1.5 to 18 J. The value of applied energy was calculated by multiplying the average radiation power by the exposure time [44]. Results and Discussion Figure 3a shows microphotographs of the porcine longissimus muscle tissue surface immediately after exposure of ultrashort pulsed radiation. For comparison, Figure 3b shows microphotographs after CW radiation action with the same power of 0.3 W. At the beginning of the exposure, the tissue surface is heated, which is accompanied by a change in the tissue color. Over time, the area of elevated temperature increases from the center to the periphery, forming a heat-affected zone (HAZ). In Figure 3, the edges of this zone are marked with a red line. The longer exposure time induces the evaporation of water contained in the near-surface tissue layers. As a result, an ablative crater (yellow line) can be observed in the center of the laser spot. Further impact causes tissue darkening around the crater and its carbonization. During the exposure, we could clearly hear clicks, which can be attributed to the characteristic sound of steam bubbles collapsing. The existence of collapsing bubbles may indicate the occurrence of cavitation effects during the action of pulsed radiation. The bubbles' collapse could lead to tissue bursting and integrity disruption. 
The stress waves generated by pulsed laser radiation in absorbing materials, such as biological tissues, are due to the thermo-elastic effect [45]. Caused by the rapid temperature increase at the laser beam focus, thermoelastic stresses lead to maximum pressure increase in a finite volume of material [29]. For example, temperature rising at a wavelength of 0.8 µm and pulse duration of about 100 fs occurs over times of the order of several picoseconds, which is insufficient for acoustic relaxation [29,46]. Experimental investigation of cavitation bubble formation and its theoretical background by G. Paltauf et al. has shown that the photoacoustic damage mechanism should be especially taken into account when describing ultrashort pulsed laser radiation exposure to biological objects [47]. The reason is that the finite-size absorbing material volume experiences tensile stresses under the action of laser radiation, which leads to cavitation within the material and photomechanical damage [47]. In [48,49], it was shown that decreasing the pulse duration to picosecond and femtosecond values minimizes the thermal effects. Therefore, there is a decrease in the threshold energy for the optical breakdown (E_th ∝ √τ). By comparing the two laser operating modes at an equal average power of 0.3 W, we can clearly see the difference in the exposure results. In the case of CW laser radiation exposure, only a heat-affected zone is formed on the tissue surface (Figure 3b). On the other hand, the ultrashort pulsed laser radiation shows a completely different picture (Figure 3a). Coagulated tissue, which can be distinguished by a dark color at the site of exposure (purple line), is observed for the pulsed laser after the first 5 s of exposure. 
The optical properties of the coagulated tissue are changing during laser radiation exposure. In turn, this effect may lead to the increased absorption of radiation. Formation of an ablative crater and tissue carbonization was clearly observed with prolonged exposure. Morphological studies were carried out to determine damage zones more accurately and to identify the character of these zones. Obtained slices allowed the depth of laser energy penetration into the studied tissues to be traced, and qualitative and quantitative evaluation of ablated tissues. 
Histological sections of tissue before the impact of laser radiation (shown in Figure 4a) were used to determine changes in tissues exposed to laser radiation. The image of the histological section (Figure 4b) after CW laser radiation with a power p = 0.3 W for t = 30 s confirms only superficial damage and the absence of irreversible damage to the cell structure (myocytes) of muscle tissue. This damage can be indicated as a heat-affected zone (3) of the tissue, which is reversible by its nature and could be repaired in the healing process. The tissue changes are accompanied by edema at the site of exposure and temporary cellular dysfunction. In contrast to the CW radiation, exposure to ultrashort pulsed radiation with the same average power of 0.3 W leads to completely different results (Figure 4c). After 30 s of exposure, there is an ablation zone (AZ), marked with a yellow line, where an ablation crater (1) with rough and charred edges has formed due to the evaporation of the intracellular fluid and the burning of the residual tissue. The ablation zone is followed by the coagulation zone (CZ), marked by the blue line. In turn, the coagulated tissue represents the burned, loose and compact layers formed along the path of laser beam penetration into the tissue depth. The carbonization of myocyte mineral components leads to the formation of a burnt edge (2A). Necrotized cells with vesicularly altered cytoplasm constitute a loose layer (2B). Finally in the coagulation zone, we distinguished a compact layer (2C), where the loss of water components in myocytes leads to their dystrophy. As in the case of CW laser exposure, a heat-affected zone (3) is observed. In both cases (Figure 4c,d), the HAZ zone is not marked with a line, since this zone is larger than the recorded area of the microphotographs. We also compared the results described above with those obtained after continuouswave radiation action with a power of 0.5 W in the applied energy range of 2.5-30 J. Figure 4d shows a histological cross-section after 15 J to 0.5 W of CW radiation. Increasing the CW laser power and, consequently, the applied energy led to more destructive effects in the tissue. Therefore, we also observe the following three thermal damage zones: ablation, coagulation and heat-affected zone, as in the case of USP laser radiation. A comparison of the interaction effect between CW and USP radiation was described in [29]. Thus, for cell surgery, the power of an argon laser at 488 nm and 514 nm wavelengths should be more than 1 W [50], which exceeds the average power of a femtosecond laser at 800 nm wavelength by more than three times [34]. Vogel et al. explain this by the fact that interactions of CW radiation and pulsed radiation with pulse duration longer than 10 µs are based on linear absorption, whereas absorption at exposure with ultrashort pulses is nonlinear. Figure 5 compares the diameters of visible thermal damage (HAZ) for two laser modes of exposure, USP radiation of 0.3 W and CW radiation of 0.3 W and 0.5 W. Increasing the time of exposure leads to an increase in the HAZ diameter in both cases. Exposure of USP and CW radiation with an average power of 0.3 W for 5 s (1.5 J) resulted in HAZ diameters of 1000 µm and 750 µm, respectively. In the first case, tissue coagulation and the beginning of crater formation was observed, in contrast to the case of the CW laser. 
After 0.3 W of USP laser radiation exposure for 60 s (18 J), a HAZ diameter of about 2000 µm was measured, which is almost two times higher than that measured after the CW radiation exposure with the same power and time. It can be noted that exposure by continuous-wave laser radiation with a power of 0.5 W for 30 s (15 J) leads to comparable values of HAZ diameter to those measured after the pulsed radiation. For CW laser radiation with a power of 0.3 W, as well as for USP radiation with the lowest applied energy of 1.5 J (t = 5 s and a power of 0.3 W), formation of an ablation crater and tissue carbonization were not observed. 
Therefore, these values are not presented in Figure 6, which shows the dependency diagrams of the depth and diameter of the ablation (AZ) and coagulation (CZ) zones on the applied exposure energy for USP with a power of 0.3 W and CW with a power of 0.5 W radiation. As can be seen in Figure 6, the dependence of ablation depth on exposure time was weak for low applied energies, for both CW and USP radiation. The diameter of the ablation crater increased after USP radiation exposure. The maximum demonstrated ablation depth was 180 µm and the diameter was 500 µm. For CW radiation exposure, the ablation crater diameter was about 300 µm, and dependence on exposure time was not observed. The depth of the coagulation zone, created by USP radiation with a power of 0.3 W for 60 s (18 J) and CW radiation with a power of 0.5 W for 60 s (30 J), was approximately the same. The diameter of the coagulation zone for such combinations of parameters differed slightly. Based on data reported in the literature, there are five main types of laser interaction with biological tissues. The main one is photothermal, which describes numerous effects [20]. Tissue temperature is an important parameter for this type of interaction. Understanding the change in local temperature will give insights into the effects that accompany laser exposure. Tunc et al. [51] confirmed in their studies that fast delivery of laser energy resulted in a sharp temperature increase, which led to the predominance of tissue vaporization over heat conduction to surrounding tissues and, as a consequence, more effective ablation and less thermal damage. On the contrary, a slow increase in temperature was accompanied by greater thermal damage. This is due to the fact that the processes of light absorption and heat generation are fast, in contrast to the heat distribution (dissipation). Therefore, temperature monitoring during laser surgery will help to reduce irreversible thermal damage and to reduce carbonization. For quantitative analysis of the single-spot experiments on muscle tissue ablation by ultrashort pulsed 2 µm radiation, we have calculated the values of the ablation area (AA), specifically the area of the ablation crater, and the coagulation area (CA) as the area of the coagulation zone [51,52]. Both areas were measured on histological cross-sections using the publicly available software "ImageJ" (National Institutes of Health) [53]. The obtained data allowed us to find the ablation efficiency (AE), defined as the ratio of the ablation area to the total irreversible thermal damage area: AE (%) = AA / (AA + CA) × 100. In addition, the calculated values for USP radiation were compared with the values for CW radiation and are shown in Figure 7. The ablation area (AA) obtained after exposure by USP radiation of 0.3 W (green bars) and CW radiation of 0.5 W (red bars) increased with the growth of the applied energy. The ablation efficiency (AE) of the USP laser radiation was about 27% for 18 J of applied energy, while for the CW laser radiation it was lower by 5% for 15 J of applied energy and higher by 5% for 30 J. 
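The ablation-efficiency definition above amounts to a one-line calculation; the following minimal sketch uses placeholder areas (in practice, AA and CA would be the ImageJ area measurements taken from the histological cross-sections, and the function name is illustrative):

```python
def ablation_efficiency(ablation_area: float, coagulation_area: float) -> float:
    """Ablation efficiency AE (%) = AA / (AA + CA) x 100, where AA is the
    ablation-crater area and CA the coagulation-zone area measured on the
    same histological cross-section (any consistent area unit works)."""
    total_damage = ablation_area + coagulation_area
    if total_damage == 0:
        raise ValueError("no irreversible thermal damage area measured")
    return 100.0 * ablation_area / total_damage

# Placeholder areas in mm^2 (illustrative only) giving AE = 27 %
print(f"AE = {ablation_efficiency(0.027, 0.073):.0f} %")
```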
Thus, it can be noted that the ablation efficiency by ultrashort pulsed radiation with lower average power (0.3 W) is comparable to the ablation efficiency by continuous-wave radiation with a higher power (0.5 W). Conclusions In this work, we studied the exposure of ultrashort pulsed laser radiation in the spectral range of 2-2.2 µm with a central wavelength of 2.06 µm to porcine longissimus tissue ex vivo. A simple, robust and compact all-fiber Ho-doped laser system generated ultrashort pulsed radiation with a duration of less than 1 ps and a repetition rate of 20 MHz. This type of ultrafast all-fiber Holmium laser system with a high pulse frequency and low energy for soft tissue irradiation has not yet been reported in the literature. After ex vivo single-spot ablation experiments on the muscle tissue in the range of applied energy 1.5-18 J, the depth and diameter of coagulation zones, ablation craters and thermal damage zones were evaluated. Comparison of USP laser exposure at an average power of 0.3 W with a CW laser at the same power and exposure time showed a difference in results. The continuous-wave laser radiation exposure led to reversible damage, namely the formation of a heat-affected zone on the tissue surface without destructive changes in the cell structure. 
The ultrashort pulsed radiation with the same average power caused more pronounced destructive tissue damage, which could be associated with the mechanism of laser-tissue interaction, possibly a nonthermal one. The relatively safe level of exposure for this USP laser, resulting in non-destructive and reversible effects, corresponds to an applied energy of less than 1.5 J. Comparison of ablative efficiency has shown that USP laser ablation can be achieved at lower average power values compared to CW laser ablation. In addition, morphological studies have shown that the tissue at the edge of the formed crater during USP laser ablation was less susceptible to carbonization. It should be noted that tissue carbonization is often undesirable in surgical practice because charred tissue has a tendency to a longer healing period and, as a consequence, to a higher risk of infection [54]. We can assume that, as proposed in [36,55], cutting the high-repetition-rate pulse sequence and creating pulse bursts with a relatively low repetition rate could contribute to a decrease in thermal damage of tissues, and consequently a reduction of carbonization. The results obtained from ex vivo experiments with tissues of postmortem animals are part of a major study of the effect on biological tissues of two-micron laser radiation. 
The next stage will be in vivo experiments on animals using the laser systems studied in this work and the previous one [41].
10,460.8
2022-04-10T00:00:00.000
[ "Medicine", "Engineering", "Physics" ]
Direct measurement of ultrafast temporal wavefunctions The large capacity and robustness of information encoding in the temporal mode of photons is important in quantum information processing, in which characterizing temporal quantum states with high usability and time resolution is essential. We propose and demonstrate a direct measurement method of temporal complex wavefunctions for weak light at a single-photon level with subpicosecond time resolution. Our direct measurement is realized by ultrafast metrology of the interference between the light under test and self-generated monochromatic reference light; no external reference light or complicated post-processing algorithms are required. Hence, this method is versatile and potentially widely applicable for temporal state characterization. I. INTRODUCTION The temporal-spectral mode of photons offers an attractive platform for quantum information processing in terms of a large capacity due to its high dimensionality and robustness in fiber and waveguide transmission. To date, many applications using the temporal-spectral mode have been proposed and realized in quantum information processing fields such as quantum computation, quantum cryptography, and quantum metrology [1][2][3][4][5][6][7][8][9][10]. In these applications, the full characterization of quantum states, i.e., complex wavefunctions, is crucial for developing reliable quantum operations. In addition, temporal-mode characterization for high-speed and precise processing often requires ultrafast time resolution, such as on the subpicosecond scale. Several established methods, such as frequencyresolved optical gating (FROG) and spectral phase interferometry for direct electric field reconstruction (SPI-DER), are well known for measuring the temporalspectral mode of classical light [11]. These methods, however, utilize the nonlinear optical processes of the light under test, which are difficult to observe for weak light at the single-photon level. In recent years, various methods for characterizing the temporal-spectral mode of quantum light have been demonstrated, such as single photons and entangled photon pairs [12][13][14][15][16][17][18][19][20][21][22][23], and some have achieved ultrafast (subpicosecond) time resolution [12,13,16,[19][20][21][22][23]. While these methods differ in the details of their measurement procedures, they have a common procedure to reconstruct the form of the wavefunction: projective measurements for the entire tempo- *<EMAIL_ADDRESS>ral (or spectral or other basis) wavefunction have to be performed first, and then the measurement data is postprocessed, as shown in Fig. 1(a). In other words, even for acquiring only one part of the wavefunction, measurement of the entire wavefunction is essential. Each set of measurement data before post-processing contains partial information of the wavefunction but is not itself the wavefunction. As a more suitable measurement method for the form of the wavefunction, direct measurement [24] is the focus of this study. The direct measurement of a wavefunction ψ(t) is defined as the measurement that can reconstruct the complex value ψ(t 0 ) only using the measurement data at the point t = t 0 , as shown in Fig. 1(b); that is, the measurement data at t 0 directly correspond to the complex value ψ(t 0 ). 
Direct measurement was first demonstrated for the transverse spatial wavefunction of single photons [24] using a technique called weak measurement [25], and then for wavefunctions and density matrices in various degrees of freedom [26][27][28][29][30]. While direct measurement was introduced to give an operational meaning to the complex-valued wavefunction, it also provides the practical advantage of requiring only one measurement basis. Although direct measurement using weak measurement suffers from an approximation error and low efficiency due to the nature of weak measurement, in recent years it has been reported that direct measurement can also be realized using strong (projection) measurement, both theoretically [31][32][33] and experimentally [34,35]. Therefore, applying direct measurement based on strong measurement to the temporal wavefunction of photons can provide a practical characterization method for temporal wavefunctions that avoids the need to post-process measurement data of the entire wavefunction: the measurement data at time t_0 directly correspond to the complex value ψ(t_0) of the wavefunction ψ(t) at t_0. In this paper, we propose a direct measurement method of temporal complex wavefunctions that can be performed for weak light at a single-photon level with subpicosecond time resolution. Our direct measurement is realized by ultrafast metrology (time gate measurement) of the interference between the light under test and self-generated monochromatic reference light with several phase differences. This mechanism is simple compared to other measurement methods of the temporal-spectral mode of quantum light; that is, it does not require external reference light or complicated post-processing of the measurement data. We also experimentally demonstrate this direct measurement method on the temporal wavefunction of light at a single-photon level and examine the validity of the measurement results. II. THEORY The proposed method for direct measurement of the temporal wavefunction is based on our previous study [33]. The wavefunction under test ψ(t) is the temporal representation of the pulse-mode state |ψ⟩, and its spectral representation ψ̃(ω) is given by the Fourier transform of ψ(t). ψ(t) can be represented by the product of the complex-valued envelope function ψ_env(t) and the carrier term e^{-iω_0 t} as ψ(t) = ψ_env(t) e^{-iω_0 t}, where ω_0 is the reference carrier frequency. We assume that ω_0 is known and then consider measuring ψ_env(t) instead of ψ(t). The Fourier transform of ψ_env(t), ψ̃_env(ω), satisfies the relation ψ̃_env(ω) = ψ̃(ω + ω_0). To realize the above mechanism, our direct measurement method [33] uses a qubit (two-state quantum system) probe mode to prepare the four phase differences; we utilize the polarization mode of the photons spanned by the horizontal and vertical states |H⟩ and |V⟩. We define the four polarization states as follows: the diagonal state |D⟩ = (|H⟩ + |V⟩)/√2, the anti-diagonal state |A⟩ = (|H⟩ − |V⟩)/√2, and the right- and left-circular states |R⟩ and |L⟩, i.e., the superpositions of |H⟩ and |V⟩ with relative phases 0, π, π/2, and 3π/2, respectively. The procedure of our direct measurement of the temporal wavefunction is shown in Fig. 2. Let the initial state be |Ψ_0⟩ := |ψ⟩|D⟩ = |ψ⟩(|H⟩ + |V⟩)/√2. The temporal and spectral representations of |Ψ_0⟩ are shown in Figs. 2(a) and (b), respectively. First, we extract the ω = 0 frequency component of the envelope (the actual frequency is ω_0) from the horizontally polarized light using a polarization-dependent frequency filter.
This operation is ideally described by the projection operator |ω_0⟩⟨ω_0| ⊗ |H⟩⟨H| + 1 ⊗ |V⟩⟨V|, where 1 is the identity on the frequency mode, and the unnormalized state after the projection is given by |Ψ_1⟩ = (⟨ω_0|ψ⟩ |ω_0⟩|H⟩ + |ψ⟩|V⟩)/√2; the horizontally polarized component thereby becomes the self-generated monochromatic reference wave, while the vertically polarized component still carries the wavefunction under test. Second, we perform projection measurements of time and polarization on |Ψ_1⟩. The projections onto the D, A, R, and L polarizations correspond to the preparations of the four phase differences 0, π, π/2, and 3π/2, respectively. The projection operator onto time t and polarization φ is described as |t⟩⟨t| ⊗ |φ⟩⟨φ|, and its projection probability is given by P(t, φ) = ⟨Ψ_1|(|t⟩⟨t| ⊗ |φ⟩⟨φ|)|Ψ_1⟩ / ⟨Ψ_1|Ψ_1⟩. Using P(t, φ) for φ = D, A, R, and L, the real and imaginary parts of ψ_env(t) are obtained as P(t, D) − P(t, A) ∝ Re[ψ_env(t)] and P(t, R) − P(t, L) ∝ Im[ψ_env(t)], where the common proportionality factor involves ⟨ψ|ω_0⟩, a constant that does not depend on t, and ⟨ω_0|t⟩ = e^{iω_0 t}/√(2π). Here, we emphasize the following two points. First, our measurement method satisfies the definition of direct measurement mentioned previously. Indeed, to obtain the complex value of ψ_env(t_0), this measurement method requires only the four projection probabilities P(t_0, φ) (φ = D, A, R, L) at time t_0. Second, our direct measurement method is more accurate and efficient than conventional direct measurement methods using weak measurement [24,[26][27][28][29][30]]. Our measurement method causes interference between the signal and the self-generated uniform reference wave using the polarization-dependent frequency filter (projection measurement) instead of weak measurement. Therefore, our method can avoid the approximation error and low measurement efficiency associated with weak measurement. We note that the polarization degree of freedom, which is used to provide the four phase differences in the interference, can be replaced by another degree of freedom, such as the path mode, when the polarization mode is already in use or is unstable. III. EXPERIMENTS We demonstrate the direct measurement of the temporal wavefunction using the measurement system shown in Fig. 3 [36]. We prepare the signal power in the following two conditions using the attenuator: the classical-light (CL) condition, in which the average photon number is 366 photons/pulse (4.69 nW), and the single-photon-level (SPL) condition, in which the average photon number is 0.58 photons/pulse (7.47 pW) and the probability of one or fewer photons per pulse is 0.885. The SPL condition is used to demonstrate that our direct measurement system works even for a signal as weak as a single-photon level. The 1560 nm beam then enters the 4-f system composed of gratings (600 grooves/mm) and lenses (focal length f = 300 mm). At the center of the 4-f system, the spectral distribution is mapped onto the transverse spatial distribution, where state preparation followed by polarization-dependent frequency filtering is performed. As seen in Fig. 3(b), two beam displacers (BDs) are set in the 4-f system to divide the optical path according to the polarization; the polarization-dependent frequency filter is realized by inserting a slit (293 µm width) in one of the paths. In contrast, the state preparation before the slit is performed equally for the two beams. After the state preparation followed by polarization-dependent frequency filtering, the polarizations of the two beams are exchanged by the half-wave plate (HWP) and then combined by the second BD so that the two optical path lengths are equal. In the state preparation, we prepare the three types of states shown in Fig. 3(c). A variable slit with gap width w and displacement s is used to quantitatively evaluate the measured temporal wavefunction.
The coverglass is used to cause a phase change. As the magnitude of the phase change depends sensitively on the inclination of the coverglass, we assume that this magnitude is unknown. The combination of the stripe mask and coverglasses is used to demonstrate the direct measurement of a complicated wavefunction. After the 4-f system, the beam is projected onto one of the D, A, R, or L polarizations by the HWP, quarterwave plate, and polarizing beam splitter. Subsequently, the beam is projected onto time t by the time gate measurement, which is realized by sum-frequency generation (SFG) of the signal beam and the 780 nm gate pulse with delay t. In SFG, these two beams are focused on the β-BaB 2 O 4 crystal by the lens (f = 50 mm), and their sum-frequency light (520 nm wavelength) is emitted at an intensity proportional to the product of the two input temporal intensities. By scanning the delay of the gate pulse t, sum-frequency light with an intensity proportional to the time intensity distribution of the signal light is extracted. Finally, the sum-frequency light is spatially and spectrally filtered to remove the stray light (not shown in the figure) and then detected by a single-photon counting module (Laser Components COUNT-NIR). For comparison, we additionally perform intensity (projection) measurements in time and frequency for the state under test in the CL condition. The state under test is extracted by the projection measurement onto V polarization from the output light of the 4-f system. The intensity measurements in time and frequency are realized by the time gate measurement and using an opti-cal spectrum analyzer (Advantest Q8384), respectively. The obtained temporal and spectral intensity distributions are used to examine the validity of the direct measurement results. We note that the spectral width δω extracted by the polarization-dependent frequency filter (1.08 THz FWHM) is not sufficiently small compared to those of the states under test generated by the slit or the stripe mask (∼ 6 THz). In this condition, the spectral wavefunction after the frequency filter should be approximated by the rectangle function rect(ω/δω), which is zero outside the interval [−δω/2, δω/2] and unity inside it. In this case, the right sides of Eqs. (3) and (4) are replaced by sinc(δωt/2)Re[ψ env (t)] and sinc(δωt/2)Im[ψ env (t)], respectively, where sinc(x) := sin(x)/x. To obtain Re[ψ env (t)] and Im[ψ env (t)], we make a correction by dividing the measured wavefunctions by sinc(δωt/2), which is independent of ψ env (t) and was determined by prior measurement. On the other hand, the time width of the gate pulses (79.2 fs FWHM) is considered to be sufficiently smaller than those of the states under test (∼ 3 ps). Hence, we assume here that the effect of the width of the time measurement can be ignored. The detailed calculation accounting for both the effects of the finite frequency and the time widths is given in Appendix A. In the following, we show the experimental results for state preparations (i)-(iii) in Fig. 3(c) and ω c = αs, respectively, where the proportional constant α := 2.41 THz/mm is derived from the geometrical configuration of our 4-f system. The temporal wavefunction obtained by Fourier-transforming rect[(ω − ω c )/∆ω] is e iωct sinc(∆ωt/2), and the time width ∆t between the two central zeros of this sinc function and the phase gradient κ are given by ∆t = 4π/∆ω = 4π/(αw) and κ = ω c = αs, respectively. 
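As a quick numerical companion to the relations just stated, the following sketch (not from the paper: the time grid, noise level, slit settings, and starting values are illustrative assumptions, and the quoted THz figures are treated as angular frequencies in rad/ps) builds the expected sinc-shaped envelope for an assumed slit and recovers ∆t and κ by fitting:

    import numpy as np
    from scipy.optimize import curve_fit

    alpha = 2.41            # THz/mm, proportionality constant quoted above
    w, s = 2.0, 0.4         # assumed slit width and displacement in mm
    d_omega, omega_c = alpha * w, alpha * s

    t = np.linspace(-6.0, 6.0, 1201)                        # delay axis in ps
    # Fourier transform of rect[(omega - omega_c)/d_omega]: e^{i omega_c t} sinc(d_omega t / 2).
    # numpy.sinc(x) = sin(pi x)/(pi x), hence the division by pi in the argument.
    psi = np.exp(1j * omega_c * t) * np.sinc(d_omega * t / (2 * np.pi))

    # Fit A|sinc[2 pi (t - tc)/dt]| to the magnitude to recover dt = 4 pi / (alpha w).
    def sinc_mag(t, A, tc, dt):
        return A * np.abs(np.sinc(2 * (t - tc) / dt))

    mag = np.abs(psi) + np.random.normal(0.0, 0.01, t.size)   # mock measurement noise
    popt, _ = curve_fit(sinc_mag, t, mag, p0=[1.0, 0.0, 2.0])
    print("fitted dt:", popt[2], " theory 4*pi/(alpha*w):", 4 * np.pi / (alpha * w))

    # Phase gradient kappa from a linear fit inside the central lobe (theory: kappa = alpha*s).
    lobe = np.abs(t) < 0.9 * 2 * np.pi / d_omega
    kappa = np.polyfit(t[lobe], np.angle(psi[lobe]), 1)[0]
    print("fitted kappa:", kappa, " theory alpha*s:", alpha * s)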
Therefore, in this state preparation, the form of the temporal wavefunction can be controlled quantitatively by changing w and s. We display the 3D plot of the result of the direct measurement of the temporal wavefunction generated by the variable slit (w = 2.0 mm, s = 0.0 mm) in Fig. 4(a). There is no significant difference between the measurement results under the CL condition (lines) and those under the SPL condition (dots), while some fluctuation due to the shot noise is observed in the results in the SPL condition. The intensity (square of the amplitude) and phase distributions of the measured temporal wavefunction are shown in Fig. 4(b), and those in the frequency domain, obtained by Fourier-transforming the measured temporal wavefunction, are shown in Fig. 4(c). Furthermore, the temporal and spectral intensity distributions obtained by the time gate measurement and optical spectrum analyzer are displayed as green dotted lines in Figs. 4(b) and (c), respectively. The agreement of these intensity measurement distributions with the intensity distribution reconstructed from the directly measured wavefunction supports the validity of our direct measurement results. A quantitative comparison between them using classical fidelity is discussed at the end of this section. Next, we examine the change in the measured temporal wavefunction when the gap width w and displacement s of the variable slit are changed. All these measurements are performed in the CL condition. Figure 5(a) shows the direct measurement results of the magnitude of the temporal wavefunction when w is changed from 1.4 mm to 2.6 mm while s is fixed at s = 0 mm. The time widths ∆t of the measured temporal amplitude, which are obtained by fitting the sinc function A|sinc[2π(t − t c )/∆t]| to the measured curves, are plotted versus w in Fig. 5(b). The values are in good agreement with the theoretical curve ∆t = 4π/(αw) (black line). Figure 5(c) shows the direct measurement results of the phase of the temporal wavefunction when s is changed from 0.0 mm to 0.8 mm while w is fixed as w = 2.0 mm. The phase gradients κ of the measured temporal phase, which are also obtained by fitting the linear function to the measured curves in the range of t ∈ [3.75 ps, 5.75 ps], are plotted versus the displacement s in Fig. 5(d). These values are also in good agreement with the theoretical curve κ = αs + κ 0 (black line), where the offset value κ 0 := −0.11 ps −1 is determined from the phase gradient when s = 0 mm. We further demonstrate the direct measurement of the temporal wavefunction generated by the slit (w = 2.0 mm, s = 0.0 mm) with a coverglass and by the stripe mask with two coverglasses. The measurement results for the slit with a coverglass are shown in Figs. 6(a)-(c). It should be noted that the frequency wavefunction derived from the directly measured time wavefunction shows a stepwise phase change due to the phase added by the coverglass. The magnitude of the obtained phase step cannot be evaluated because its true value is not known in advance, as mentioned above. Nevertheless, the agreement of the spectral intensity distributions derived from the directly measured time wavefunction (red and blue lines) with the results of the frequency intensity measurement (green line) indicates that the characterization of the wavefunction by direct measurement is performed properly. Figures 7(a)-(c) show the measurement results for the stripe mask with two coverglasses, which have more complicated waveforms. 
In this case as well, the point to be noted is that the frequency wavefunction de- tributions of the wavefunctions obtained by the direct measurement and those obtained by the intensity (projection) measurement using the classical fidelity (Bhattacharyya coefficient). The classical fidelity is defined as j √ p j q j for two probability distributions {p j } and {q j }. Table I shows the classical fidelity between the intensity distributions obtained by the direct measurement and the projection measurements for panels (b) and (c) in Figs. 4, 6, and 7. We can see that these fidelities show high values close to 1. IV. DISCUSSION First, we describe the performance of the direct measurement system used in our experiment. The time resolution is determined by the time width of the gate pulse and the phase-matching bandwidth of SFG. In our case, the latter effect is negligible and the time resolution is 79.2 fs FWHM, which gives the subpicosecond resolution. On the other hand, the measurable range in the time domain is determined by the time width of the selfgenerated reference light in the shape of a sinc function. The time width between the two central zeros of the sinc function is 11.7 ps. Therefore, the dynamic range of our direct measurement system is evaluated to be 11.7 ps/79.2 fs = 148. Next, we remark on previous studies related to direct measurement of the temporal wavefunction. A recently reported experiment on δ-quench measurement [37] has demonstrated measurement of the temporal mode of light by applying instantaneous phase modulation followed by projection onto a specific frequency. Although this method differs from direct measurement using weak measurement [24] and our direct measurement method, it satisfies the definition of direct measurement of the temporal wavefunction. In this measurement, the time resolution did not reach the subpicosecond scale, and classical light much stronger than a single-photon level was used as the light under test. In addition, a temporal-mode measurement method reported over 30 years ago [38] also satisfies the definition of direct measurement. Although it was devised independently of the context of direct measurement, its configuration is similar to that of our direct measurement system. In this measurement, the time resolution reached the subpicosecond scale, while classical light was used as the light under test. As a characterization method of the temporal mode of classical light, this method is currently rarely used in contrast to other sophisticated methods such as FROG and SPIDER. However, the simple configuration of this method makes it suitable for the measurement of single photons, and the significance of our experiment is that it demonstrates this. V. CONCLUSION We proposed a direct measurement method for characterizing the temporal wavefunction of single photons and experimentally demonstrated the direct measurement for several test wavefunctions. The experimental results showed that the direct measurement method works at the single-photon level and can achieve subpicosecond time resolution. We clarified the validity of the direct measurement by quantitatively evaluating the measurement results when using the variable slit for state preparation and calculating the fidelities between the results of the direct measurement and the intensity distribution obtained by the projection measurement. This direct measurement method can be applied not only to the temporal-spectral mode but also to other degrees of freedom. 
In addition, it is expected that the direct measurement method can be extended not only to pure states but also to mixed states and processes; such an expansion of the scope of application of direct measurement is a subject for future research. Appendix A. Here, we describe the calculation of our direct measurement method when the effects of the finite resolution of the frequency filter and of the time measurement are considered. The projection operator of the frequency filter with spectral width δω is given by ∫_{−∞}^{∞} dω rect[(ω − ω_0)/δω] |ω⟩⟨ω|, where rect[(ω − ω_0)/δω] is zero outside the interval [ω_0 − δω/2, ω_0 + δω/2] and unity inside it. The unnormalized resultant state after the polarization-dependent frequency filter is described as |Ψ_1'⟩ = ( ∫ dω rect[(ω − ω_0)/δω] ⟨ω|ψ⟩ |ω⟩|H⟩ + |ψ⟩|V⟩ )/√2. The time measurement implemented by optical gating is characterized by the positive-operator-valued measure ∫_{−∞}^{∞} dt′ g_t(t′) |t′⟩⟨t′|, where g_t(t′) is the non-negative gate function centered at t′ = t. The probability P′(t, φ) that the results of the time and polarization measurements are t and φ, respectively, is described as P′(t, φ) = ∫ dt′ g_t(t′) ⟨Ψ_1'|(|t′⟩⟨t′| ⊗ |φ⟩⟨φ|)|Ψ_1'⟩ / ⟨Ψ_1'|Ψ_1'⟩. Therefore, we obtain the following results: P′(t, D) − P′(t, A) ∝ ∫ dt′ g_t(t′) Re[ ∫ dω rect[(ω − ω_0)/δω] ⟨ψ|ω⟩⟨ω|t′⟩ ψ(t′) ], with the analogous Im[...] expression for P′(t, R) − P′(t, L). Assuming that ⟨ω|ψ⟩ takes the constant value ⟨ω_0|ψ⟩ in the interval [ω_0 − δω/2, ω_0 + δω/2], the integral with respect to ω can be calculated as ∫ dω rect[(ω − ω_0)/δω] ⟨ω|t′⟩ = (δω/√(2π)) sinc(δω t′/2) e^{iω_0 t′}, and then we obtain P′(t, D) − P′(t, A) ∝ ∫ dt′ g_t(t′) sinc(δω t′/2) Re[ψ_env(t′)] and P′(t, R) − P′(t, L) ∝ ∫ dt′ g_t(t′) sinc(δω t′/2) Im[ψ_env(t′)]. Furthermore, when the temporal width of the optical gate is sufficiently small compared with that of ψ_env(t), we can approximate g_t(t′) = δ(t − t′) and thus obtain P(t, D) − P(t, A) ∝ sinc(δωt/2) Re[ψ_env(t)], P(t, R) − P(t, L) ∝ sinc(δωt/2) Im[ψ_env(t)]. (A8) We adopt these approximated results in the main text.
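To tie the reconstruction and the finite-filter-width correction together numerically, here is a minimal end-to-end sketch. It is illustrative only: the envelope shape, reference amplitude, phase-sign convention, and the reading of the 1.08 THz filter width as rad/ps are assumptions, not the experimental values (with that reading, 4π/1.08 ≈ 11.6 ps, close to the 11.7 ps reference width quoted in the Discussion). The last lines compute the classical fidelity between the reconstructed and true intensity distributions, as done in Sec. III.

    import numpy as np

    t = np.linspace(-5.0, 5.0, 1001)                        # ps, illustrative grid
    delta_omega = 1.08                                      # filter width, taken as rad/ps
    psi_env = np.exp(-t**2 / 4) * np.exp(1j * 0.8 * t)      # placeholder envelope under test
    ref = 0.3                                               # constant standing in for <omega_0|psi>
    sinc = np.sinc(delta_omega * t / (2 * np.pi))           # sinc(delta_omega t / 2); numpy sinc is normalized

    # Simulated projection probabilities for phase settings 0, pi, pi/2, 3pi/2 (one sign convention).
    def prob(phase):
        return np.abs(ref * sinc + np.exp(-1j * phase) * psi_env) ** 2

    P_D, P_A = prob(0.0), prob(np.pi)
    P_R, P_L = prob(np.pi / 2), prob(3 * np.pi / 2)

    # Raw direct-measurement signal, then the sinc correction of Eq. (A8).
    raw = (P_D - P_A) + 1j * (P_R - P_L)                    # proportional to sinc * psi_env
    good = np.abs(sinc) > 0.05                              # avoid dividing near the sinc zeros
    rec = np.where(good, raw / np.where(good, sinc, 1.0), np.nan)
    rec *= np.max(np.abs(psi_env)) / np.nanmax(np.abs(rec))  # fix the unknown constant for comparison

    # Classical fidelity (Bhattacharyya coefficient) between reconstructed and true intensities.
    p, q = np.abs(psi_env[good]) ** 2, np.abs(rec[good]) ** 2
    p, q = p / p.sum(), q / q.sum()
    print("reference sinc width 4*pi/delta_omega =", 4 * np.pi / delta_omega, "ps")
    print("classical fidelity:", np.sum(np.sqrt(p * q)))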
5,030.2
2021-03-01T00:00:00.000
[ "Physics" ]
Photophysical Characterization of Ru Nanoclusters on Nanostructured TiO2 by Time-Resolved Photoluminescence Spectroscopy Despite the promising performance of Ru nanoparticles or nanoclusters on nanostructured TiO2 in photocatalytic and photothermal reactions, a mechanistic understanding of the photophysics is limited. The aim of this study is to uncover the nature of light-induced processes in Ru/TiO2 and the role of UV versus visible excitation by time-resolved photoluminescence (PL) spectroscopy. The PL at a 267 nm excitation is predominantly due to TiO2, with a minor contribution of the Ru nanoclusters. Relative to TiO2, the PL of Ru/TiO2 following a 267 nm excitation is significantly blue-shifted, and the bathochromic shift with time is smaller. We show by global analysis of the spectrotemporal PL behavior that for both TiO2 and Ru/TiO2 the bathochromic shift with time is likely caused by the diffusion of electrons from the TiO2 bulk toward the surface. During this directional motion, electrons may recombine (non)radiatively with relatively immobile hole polarons, causing the PL spectrum to red-shift with time following excitation. The blue-shifted PL spectra and smaller bathochromic shift with time for Ru/TiO2 relative to TiO2 indicate surface PL quenching, likely due to charge transfer from the TiO2 surface into the Ru nanoclusters. When deposited on SiO2 and excited at 532 nm, Ru shows a strong emission. The PL of Ru when deposited on TiO2 is completely quenched, demonstrating interfacial charge separation following photoexcitation of the Ru nanoclusters with a close to unity quantum yield. The nature of the charge-transfer phenomena is discussed, and the obtained insights indicate that Ru nanoclusters should be deposited on semiconducting supports to enable highly effective photo(thermal)catalysis. ■ INTRODUCTION Due to an increasing energy demand and increasing amounts of greenhouse gases, interest in alternative fuel sources has increased dramatically in the past decades. 1,2 Specifically, photocatalysis has gained interest as a promising "green" method to produce renewable fuels. Typically, in photocatalysis, a semiconductor is used to harvest solar energy to drive chemical reactions. 1−5 Several recent studies have shown the promise of photoexciting metal nanoparticles to drive photocatalytic conversion at ambient conditions. 6−10 A relatively new field combining the strengths of heterogeneous catalysis and photocatalysis is photothermal catalysis. 11−15 Typically, metal nanoparticles are loaded on a metal oxide support, mostly in some form of TiO 2 . Importantly, the addition of photon energy to thermal energy enables us to (i) achieve significantly higher activities at relatively low temperatures and (ii) improve product selectivity by opening up new chemical reaction pathways, otherwise inaccessible. 16 One explanation for the effect of light is that conversion is preceded by reactant adsorption (similar to "classical" heterogeneous catalysis), followed by light-induced electron transfer into the lowest unoccupied molecular orbital (LUMO) of surface adsorbates (the reactant), which weakens chemical bonds and thus lowers the activation energy for chemical conversion. 17,18 Aside from these effects with the adsorbate, a variety of photoinduced processes can also occur between a metal nanoparticle and a metal oxide semiconductor onto which the particles are adsorbed, 19 with the excitation wavelength likely playing an important role. 
Visible excitation of Au nanoparticles has been reported to lead to ultrafast hot electron transfer into TiO 2 . 20 In the case of spectral overlap, Forster-type resonance energy transfer between the semiconductor and metal nanoparticle 8 or between metal nanoparticles 21 is also possible. Furthermore, it is essential to distinguish between few nanometer or smaller metal nanoclusters for which molecular-type electronic levels are well known 22−24 and larger nanoparticles with a size-dependent plasmon resonance energy. 25 Ultrafast spectroscopy is powerful to elucidate fundamental insights into light-induced mechanisms and dynamics. In our group, we have used time-resolved photoluminescence (PL) spectroscopy to understand the photodynamical processes in thin nanocrystalline anatase TiO 2 films in aqueous media at different NaCl concentrations and at different pH values, enabling us to discriminate between bulk and surface charge carrier processes. The PL of the latter is red-shifted and sensitive to the environment. We also observed a red shift in the PL spectrum with time following photoexcitation, indicating directional charge diffusion from the TiO 2 nanoparticle bulk toward its surface. 26 Furthermore, significant insight has been gathered for commonly used silver or Ag nanoparticles. Especially, the intense PL of Au nanoclusters and small nanoparticles has been studied intensively, 23,27 with the PL quantum yield increasing with a decreasing diameter, 28 while Ag nanoclusters are also well known for their PL. 29 However, for photothermal catalysis, one of the most effective nanoparticles consists of Ru. 30−32 Very few photophysical studies for this system exist, and mechanistic insight regarding potential light-induced interfacial charge-transfer phenomena with a semiconductor support and the role of UV versus visible photoexcitation is limited. Sample Preparation. Ru/TiO 2 and Ru/SiO 2 were prepared as follows: 0.103 g of RuCl 3 ·xH 2 O and 0.523 g of PVP were dissolved in 200 mL of methanol and 160 mL of water. Then, 1 g of either TiO 2 or SiO 2 was added to the solution. After vigorous stirring for 1 h, 0.185 g of NaBH 4 was added to the solution, yielding a color change into black. After stirring was continued for 2 h at room temperature, the temperature of the solution was elevated to 50°C and stirring was continued further for 2 h. Then, the precipitate was intensively washed multiple times with Milli-Q water. Finally, the as-obtained product was dried overnight at 90°C in air. Unloaded and Ru-loaded TiO 2 and SiO 2 were coated on quartz substrates through a drop-casting procedure. First, the quartz substrates were cleaned through ultrasonication in a bath of acetone for 15 min, followed by ultrasonication in a bath of water for 15 min. Then, the substrates were rinsed with H 2 O and blow-dried with N 2 . To increase adhesion, 36 the substrates were then treated for 30 min in a mixture of H 2 O, H 2 O 2 (30 wt %), and NH 4 OH (28.0−30.0% NH 3 basis) in a 5:1:1 ratio. Afterward, the quartz substrates were once more rinsed with water and put on a heating plate at 100°C. Before drop-casting, the powders were brought in an aqueous suspension with a concentration of 10 g/L. After sonication for 30 min, the suspensions were drop-cast on the quartz substrates. After drying, the samples were treated in an oven in air at 200°C overnight. As-prepared samples were stored in argon afterward. Characterization. 
Characterization of the samples took place using several techniques prior to the drop-casting step. To determine the dispersion and morphology of Ru on TiO 2 and SiO 2 , high-angle annular dark-field (HAADF) images were collected through scanning transmission electron microscopy (STEM) measurements, which were performed using an FEI cubed Cs-corrected Titan. For elucidation of the oxidation state of Ru, X-ray photoelectron spectroscopy (XPS) measurements were performed using a PHI Quantes scanning XPS/HAXPES microprobe with a monochromatic Al Kα X-ray source (1486.6 eV). Diffuse reflectance spectroscopy was performed using the deuterium lamp of an Avantes AvaLight-DH-S-BAL light source. An Avantes AvaSpec-2048 spectrometer was used to determine the diffuse reflectance spectra of the different samples. BaSO 4 was used as a reference sample. The Kubelka−Munk plots F(R) were calculated from these diffuse reflectance spectra through the following formula, 37 F(R) = (1 − R)²/(2R), where R is the measured reflectance. These Kubelka−Munk plots correlate with the absorbance spectra of the samples. Finally, to elucidate the difference in crystallinity between the TiO 2 used in this study and the TiO 2 used in our previous study, 26 we performed X-ray diffraction (Bruker D2 Powder) using the Cu Kα line under an accelerating voltage of 30 kV. Time-Resolved PL Experiments. The experimental setup used for the photoluminescence experiments has been described in detail in previous work. 26 Briefly, the output of a Fianium laser (FP-532-1-s, 532 nm center wavelength, 300 fs pulse duration, and 80.37 MHz repetition rate) was used as the light source. For experiments performed with λ exc. = 532 nm, the output was attenuated to 25 mW. For experiments with λ exc. = 267 nm, a second-harmonic UV signal was generated by focusing 700 mW into a 3 mm thick β-BaB 2 O 4 crystal (Newlight Photonics) using a 20 cm focal length quartz lens and recollimated after the second harmonic generation using a 20 cm focal length quartz lens. The output was sent to the sample by three dichroic mirrors (Thorlabs, MBI-K04) and through an FGUV11-UV filter (Thorlabs) to remove the residual 532 nm component. The λ exc. = 267 nm and λ exc. = 532 nm experiments were performed using a power of 27 and 8.2 μW, respectively. The sample was kept in a sealed fluorescence quartz cuvette (101-QS, Hellma Analytics, 10 mm × 10 mm optical path length), cleaned with ethanol, and filled with argon. The PL signals emitted from the layers on quartz were collected and focused on the input of a spectrograph (Acton SP2300, Princeton Instruments, 100 μm slit width, 50 lines/mm grating blazed at 600 nm) with two 2 in. focal glass lenses (50 mm focal length). The PL signal of the UV-fused silica quartz substrates was verified to be negligible for both 267 and 532 nm excitations. In the case of photoexcitation at 532 nm, the PL signal was sent through a 570 nm long-pass filter to prevent the 532 nm light inevitably scattered by the sample from entering the streak camera setup. The slit in front of the photocathode of the streak camera was set at 180 μm, yielding a time resolution of 30 ± 1 ps at a time range of 5 (i.e., a time window of 2 ns) and 15 ± 1 ps at a time range of 3 (i.e., a time window of 200 ps). Prior to the time-resolved PL experiments, the spectral calibration was checked and adapted if necessary using a Hg/Ar calibration lamp (Oriel, LSP035).
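For completeness, the Kubelka−Munk conversion used above is simple enough to state as a small helper; this is just the standard formula, and the reflectance values in the example are placeholders rather than measured data:

    import numpy as np

    def kubelka_munk(R):
        """F(R) = (1 - R)^2 / (2 R), with R the diffuse reflectance as a fraction of the BaSO4 reference."""
        R = np.asarray(R, dtype=float)
        return (1.0 - R) ** 2 / (2.0 * R)

    print(kubelka_munk([0.95, 0.50, 0.10]))   # illustrative reflectance values only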
Furthermore, the PL spectra were corrected for the spectral sensitivity of the setup measured using a calibrated blackbody lamp (Ocean Optics, HL-2000). The time windows used were either 2 ns (i.e., time range of 5) or 600 ps (i.e., time range of 3). The PL decay was verified to remain constant during the integration time ( Figure S1), although the amplitude decreased over the course of hours. The open-source program Glotaran 38 was used to perform global analysis, analogous to our earlier work on the nature of PL in nanostructured TiO 2 26 and commonly used to account for the spectral overlap of coexisting species and to disentangle their individual spectra and dynamics. 39 The spectrotemporal PL behavior could be described with two pathways, with the exception for TiO 2 , where a description of three pathways is more accurate (see the Results and Discussion section). Initial fitting and determination of τ 1 and τ 2 (and possibly τ 3 ) values were realized with the data of a time range of 5. By fixing the value(s) of τ 2 (and τ 3 ), the value of τ 1 was determined more accurately using data in the time range of 3. To determine the final lifetime values, multiple iterations were performed until the point that the values stabilized (i.e., changed less than the error). ■ RESULTS AND DISCUSSION Material Characterization. XRD analysis confirms that the applied TiO 2 consists of a major portion of anatase and a minor portion of rutile (see Figure S2). Note that Degussa P25, a combination of roughly 80% anatase and 15% rutile (the remaining 5% can be attributed to an amorphous phase), 40 shows a higher photocatalytic activity than either pure rutile or pure anatase TiO 2 . 41 The PL spectra of anatase and rutile TiO 2 are known to differ, 42−44 while the interface of rutile and anatase was reported to promote light-induced charge separation and the photocatalytic activity. 45 Figure 1a presents the annular dark-field image obtained through scanning transmission electron microscopy of Ru/ TiO 2 , showing a nanocrystalline structure consisting of TiO 2 nanoparticles with a size of ∼20 nm. The TiO 2 surface is mostly decorated with 1−2 nm diameter Ru nanoparticles, with outliers at 0.5 and 3 nm; such small nanoparticles are often referred to as nanoclusters. 27 On SiO 2 , Ru nanoparticles are present with a size distribution ranging from 0.5 to 5 nm ( Figure S3). They are also less dispersed and form aggregates. XPS shows that the Ru nanoclusters consist of metallic Ru (ca. 40%) but are also partly oxidized ( Figure 1b and Table S1). Although it is hard to exactly elucidate the distribution between oxidized and reduced Ru, it is likely that exposure to air results in a partial oxidation of the surface. Thus, we postulate that the Ru nanoparticles are possibly deposited onto the TiO 2 or SiO 2 as tiny core−shell particles, with a metallic core and a thin oxidized shell. The Kubelka−Munk plots of TiO 2 , SiO 2 , Ru/TiO 2 , and Ru/ SiO 2 are shown in Figure S4. Analogous to other studies, 46 TiO 2 is only able to absorb light <400 nm. This agrees with literature values for a band gap of 3.0 eV for rutile and 3.2 eV for anatase. 46,47 Since SiO 2 is an insulator, a negligible signal is observed in the Kubelka−Munk plot. The Ru nanoparticles allow for visible light absorption of Ru/TiO 2 and Ru/SiO 2 . This absorption can originate from both metallic Ru and from RuO 2 . Very small (few nanometer diameter) metal nanoclusters are known to show molecular-type electronic transitions. 
27 RuO 2 has a band gap of 2.3 eV, 48 which may be larger here due to the small diameters of the nanoparticles, likely giving rise to quantum confinement effects. Based on Figure S4, TiO 2 and Ru/TiO 2 are expected to absorb laser light with an excitation wavelength of 267 nm strongly and Ru/ SiO 2 mildly. Only the Ru nanoclusters and particles should be able to absorb the 532 nm laser light. Time-Resolved Photoluminescence (PL) Studies. To explore a potential role of UV versus visible photoexcitation in the charge carrier dynamics, as well as the occurrence of interfacial charge separation between the TiO 2 and the Ru nanoclusters, time-resolved photoluminescence (PL) studies were performed by excitation with 300 fs pulses with a center The PL spectrum at 50 ps is substantially blue-shifted compared to that at 250 ps, indicating a different physical origin of the first. The red shift continues from 250 ps to 1 ns although less substantial. The red-shifted PL spectra and the stronger bathochromic shift with time in the present work relative to our earlier study on nanoporous anatase TiO 2 in various aqueous solutions 26 are likely due to differences in the crystalline phase (see Figure S2 for XRD), preparation method, and/or environment. In our previous work, we assigned this bathochromic shift with time to electron diffusion from the TiO 2 bulk toward the surface. This process likely occurs through multiple trapping and detrapping of electrons that are relatively mobile and likely move via a hopping-type process. 49 During this process, they may recombine (non)radiatively with relatively immobile hole polarons. This directional electron diffusion can also explain the wavelength dependency in the PL decay observed (Figure 2c). The PL at the highest photon energies, presumably primarily originating from bulk recombination, likely decays the fastest due to electron diffusion competing with the PL, hence lowering the PL lifetime. On the contrary, electron diffusion close to the possibly deeper trap states close to or at the TiO 2 surface 26 is likely slower and therefore less competitive to radiative decay. This also explains why the red shift in the PL spectrum especially occurs at early times, as evident from, e.g., the spectra at 50 and 250 ps in Figure 2a. At 250 ps, a major fraction of the electrons have reached the TiO 2 surface, explaining the minor red shift from 250 ps to 1 ns and the appearance of a nondecaying component. The latter causes the background signal (before t = 0 ps) to increase due to the back sweep of the streak camera used for PL detection. With the time window of the synchroscan unit (2 ns), this leads to a nondecaying PL component in the near-IR that could not be resolved. Due to the very low intensity of the PL signal, measurements at a lower photoexcitation repetition rate with single photon counting detection are unfeasible and have therefore not been performed. If such experiments would be feasible, the absence of the streak camera back streak in single photon counting detection can be expected to slightly affect the slow decay above ca. 550 nm. In the case of >1−2 ns PL lifetimes, the back streak yields a slightly slower decay than reality. 50 However, the streak camera is perfectly suitable to catch the subnanosecond decay at higher photon energies (see Table 1 for lifetimes), and this will therefore not be affected. 
Even for the slowest PL decay observed for TiO 2 following a 267 nm excitation, the extrapolated PL at 12 ns relative to the maximum PL intensity observed around 500 nm is very weak, Table 1). Data around 532 nm have been removed because of the scattering of residual laser light, while potential PL < 350 nm was blocked by the two 2 in. glass lenses used for collecting the PL. Figure S5). At the other PL wavelengths, this percentage is lower. Charge accumulation due to long-lived carriers in deep trap states, which is hard to completely eliminate in a metal oxide semiconductor, is hence minor. Figure 2b shows the PL spectra for Ru/TiO 2 in Ar after a 267 nm excitation. Compared to TiO 2 (Figure 2a; see also Figure S7), the PL spectra are clearly blue-shifted. Considering the minor differences between the PL of Ru/SiO 2 and SiO 2 at a 267 nm excitation ( Figure S6), a major contribution of the Ru nanoclusters to the PL is unlikely under these conditions. As SiO 2 is a wide-band-gap semiconductor and does not absorb at 267 nm (see also Kubelka−Munk plots in Figure S4), the PL from the SiO 2 is likely a result of the sub-band-gap excitation followed by emission from trap states. The blue emission observed agrees with the earlier work, in which the PL was assigned to defects. 51 The minor red shift and broadening of the PL spectrum observed for Ru/SiO 2 compared to that for SiO 2 is likely a result of the impact of Ru nanoparticles on trap states in the SiO 2 , giving rise to the PL signal. The lack of substantial PL from the Ru nanoparticles upon UV excitation also agrees with the absence of a more intense PL signal for Ru/TiO 2 compared to that of TiO 2 . The PL spectra in Figure 2b are hence likely predominantly a result of TiO 2 photoexcitation. Interestingly, the PL spectra of Ru/ TiO 2 are blue-shifted relative to those of TiO 2 , and the bathochromic shift with time from ca. 435 to 460 nm is also largely reduced compared to bare TiO 2 (Figure 2a), indicating that Ru nanoclusters quench in particular the surface PL of TiO 2 . The absence of a significant difference in the PL intensity of Ru/TiO 2 relative to TiO 2 at shorter wavelengths excludes ultrafast (i.e., within the instrumental response time) interfacial photoinduced charge separation, although this may occur to some degree at a nanosecond time scale after photoexcitation for charge carriers that have succeeded to diffuse to the Ru/TiO 2 interface. Figure 2d shows the decay at the selected PL wavelengths. Again, a gradual increase in PL lifetime is observed with lowering the photon energy. Two important differences are noticeable relative to that of bare TiO 2 . First, the fast component especially pronounced at higher photon energies is absent, which is likely a result of the surface functionalization as discussed below. Also, the nondecaying component observed for TiO 2 at low photon energies is absent, likely as a result of Ru nanoclusters quenching the surface PL of TiO 2 . Upon switching the excitation wavelength from 267 to 532 nm, the PL behavior changes drastically. As can be expected, the illumination of TiO 2 only results in scattering of the 532 nm pulses and no detectable PL (see Figure S8). The strong PL signal centered around 590 nm observed for Ru/SiO 2 in Ar ( Figure S9a) decaying in a picosecond to nanosecond time window hence primarily originates from photoexcitation of the Ru nanoclusters, which are also responsible for the absorption of visible light ( Figure S4). 
Note that the illumination of SiO 2 at 532 nm does not give any detectable PL ( Figure S8). Figure S9b shows a weak wavelength dependency of the PL decay of Ru/SiO 2 , possibly due to some structural inhomogeneity. This PL behavior is in agreement with the literature on a few nanometer size metal nanoclusters, for which molecular-type electronic levels are well known. 22−24 For 2 nm diameter Ru nanoclusters, a broad PL band around 560 nm was reported, 52 while ca. 1.5 nm diameter Ru nanoclusters were observed to show a broad PL band around 460 nm. 53 In contrast, despite the absorption of the Ru nanoclusters at 532 nm ( Figure S4) and the PL observed in the present work on insulating SiO 2 ( Figure S6) and in earlier work for 1.5−2 nm diameter Ru nanoclusters in solution, 52,53 no PL could be detected for Ru/ TiO 2 . This striking difference indicates the PL quenching of the Ru nanocluster excited states, most likely by ultrafast interfacial charge separation with the TiO 2 . Forster-type resonance energy transfer 8 from the Ru nanoclusters toward the TiO 2 is unlikely because of the lack of spectral overlap. The occurrence of charge separation agrees with density functional theory studies, reporting photoinduced electron transfer from excited Ru nanoclusters into TiO 2 . 54 The present work shows that this light-induced interfacial charge separation process likely occurs within the instrumental response time of the streak camera, either during photoexcitation 55 of the Ru nanoclusters or shortly thereafter on a femtosecond to early picosecond time scale. Global analysis demonstrates that the spectrotemporal PL behavior is well described by a parallel decay model, analogous to our earlier work on nanostructured anatase TiO 2 in different aqueous solutions. 26 A parallel model instead of a sequential model has been chosen because of the full development of the PL signal within the instrumental response time and the absence of a subsequent increase in signal. Note that although this model is likely a simplification of the reality, it describes all PL data well as apparent from the fits included as lines in Figures 2, S5, and S8. Figure 3 presents the normalized decayassociated spectra (DAS) obtained from global analysis using a parallel decay model; that is, DAS1 decays with τ 1 , DAS2 decays with τ 2 , and (only for TiO 2 at a 267 nm excitation) DAS3 decays with τ 3 . Table 1 presents the minimum number Table 2. of parallel decay processes needed for a good fit and obtained lifetimes. The spectrotemporal behavior of TiO 2 at a 267 nm excitation is well described by a parallel model with three components, while for Ru/TiO 2 at these conditions, we only need two components, likely due to the TiO 2 surface PL quenching by the Ru nanoclusters. A good fit for the PL of Ru/ SiO 2 at a 532 nm excitation is obtained by using a parallel decay model with two components ( Figure S10). The obtained lifetimes (Table 1) are comparable to values in the literature for a few nanometer size Au nanoclusters. 56,57 An important question to answer is whether the DAS indeed consists of one component, i.e., it presents a single photophysical process, or whether a second (minor) component is present. This would be applicable in the case where the DAS corresponds to more than one photophysical decay process. Spectral deconvolution shows that the DAS are well described by Gaussian functions, with corresponding parameters presented in Table 2. 
For TiO 2 at a 267 nm excitation, both DAS2 and DAS3 are well described by single Gaussians, centered at 2.48 and 2.11 eV, respectively. DAS1 is predominantly described by a PL band centered at 2.55 eV and a shoulder (14%) of the 2.11 eV band. Similarly, for Ru/ TiO 2 at a 267 nm excitation, DAS1 can be deconvoluted into two Gaussians centered at 3.07 eV and a shoulder (21%) at 2.45 eV, while for DAS2, a single Gaussian centered at 2.71 eV is sufficient. For Ru/SiO 2 at a 532 nm excitation, DAS1 is well described by a single Gaussian centered at 2.11 eV, while DAS2 has in addition to this band a tail (25%) centered at 1.79 eV, which likely arises from some structural inhomogeneity. Discussion and Proposed Photophysical Models. In Figure 4, we propose photophysical models for the processes following photoexcitation of TiO 2 and Ru/TiO 2 , highlighting the differences between 267 and 532 nm excitations. The first mainly leads to photoexcitation of the TiO 2 , whereas photoexcitation of the Ru nanoclusters is minor or negligible under these conditions. Since the photon energy (4.64 eV) exceeds the TiO 2 band gap, photoexcitation initially leads to the generation of hot or nonthermalized electrons, which thermalize by electron−phonon coupling reported to occur in <50 fs. 58 The interaction of electrons with immobile hole polarons may lead to self-trapped excitons, although these have not been included in Figure 4 because of the <5% quantum yield of this process at room temperature. 59 During the 300 fs photoexcitation pulse, electrons and holes trapped in shallow bulk and surface traps are likely generated, 60,61 which can explain why for the spectral deconvolution of DAS1 of both TiO 2 and Ru/TiO 2 two Gaussians are needed (Table 2), indicating two physical origins of DAS1. The dominant Gaussian at the highest PL photon energy likely presents bulk charge recombination, and the second weaker Gaussian at the lowest PL photon energy presents surface charge recombination. As the mobility of electrons in TiO 2 is likely at least 10 times higher than that of holes, 62 photoexcitation can be expected to be mainly followed by the diffusion of electrons. Based on an electron diffusion coefficient of 2 × 10 −5 cm 2 /s for nanostructured TiO 2 , 63 a 1−2 ns diffusion time from the bulk toward the surface can be estimated. During this multiple (de)trapping process, 49 electrons likely gradually relax into deeper traps, 26 causing a bathochromic PL shift with time. The observation that the bathochromic shift in Figure 2a mainly occurs in the first 250 ps after excitation and slightly further from 250 ps to 1 ns indicates that this diffusion predominantly occurs within 250 ps and slightly beyond this time window. Based on this assignment and our earlier work, 26 we cautiously assign DAS2 to a decay process with intermediate behavior between bulk electron−hole recombination and recombination sensitive to surface termination (DAS3). The blue-shifted PL spectra for Ru/TiO 2 relative to TiO 2 , the diminished bathochromic shift with time, and the lack of a necessity to include DAS3 in the global analysis clearly demonstrate that the presence of Ru nanoclusters quenches especially the surface PL of the TiO 2 . The quenching of the low-energy PL of TiO 2 induced by the Ru nanoclusters can be assigned to several effects. First, the Ru nanoclusters may introduce new trap states within the TiO 2 band gap at or near the surface. 34,54,64 Second, the Ru nanoclusters may passivate existing TiO 2 surface trap states. 
For a (101) TiO 2 surface, deep electron and hole traps have been assigned to undercoordinated Ti 5c 3+ and O 2c − sites, on which the Ru nanoclusters can be expected to have a major impact. A third possibility is that an ultrathin RuO 2 shell around the Ru nanocluster (see the XPS analysis in Figure 1 and Table S1) accepts photoinduced holes from the TiO 2 , as reported earlier. 65−67 As holes in nanocrystalline TiO 2 are relatively immobile compared to electrons, 62 this process is likely most relevant for holes trapped at or close to the TiO 2 surface. The resulting low quantity of surface hole polarons will have consequences for electrons that succeed to diffuse from the bulk toward the TiO 2 surface, as they could not recombine (non)radiatively with trapped holes any longer. Considering the ultrathin RuO 2 shell around the Ru nanocluster (Figure 1), we expect that the latter scenario could play a significant role here, which can also explain the difference in τ 1 values between Ru/TiO 2 (733.9 ± 12.3 ps) and TiO 2 (25.5 ± 0.07 ps). A lower quantity of surface hole polarons for Ru/TiO 2 implies less surface electron−hole recombination competing with (non)radiative recombination in the bulk of the TiO 2 nanoparticle and therefore a longer τ 1 value. The longer τ 1 and τ 2 values observed here for Ru/TiO 2 compared to those for TiO 2 indicate that electron transfer from photoexcited TiO 2 into the Ru is less likely, as such an electron-transfer process can be expected to decrease τ 1 and τ 2 . At a 532 nm photoexcitation, the situation is entirely different. In this case, the Ru nanoclusters are mainly responsible for the emission observed on the insulating SiO 2 support ( Figure S9), and the PL is well described by a parallel model with two lifetimes (200.1 ± 1.2 and 985.3 ± 1.9 ps). This PL behavior likely originates from molecular-type LUMO and highest occupied molecular orbital (HOMO) electronic levels well known for a few nanometer size metal nanoclusters. 22,23 The PL spectrum agrees with earlier work on Ru nanoclusters, reporting a broad PL band around 560 nm for 2 nm diameter Ru nanoclusters. 52 The obtained PL lifetimes are also comparable to literature values for a few nanometer size Au nanoclusters. 56,57 The biphasic decay may originate from a distribution in Ru nanocluster diameters, oxidation states, nanocluster aggregation, and/or distance-dependent Forster resonance energy transfer between the Ru nanoparticles. 68 In contrast to present results on Ru/SiO 2 and earlier work on unsupported 1.5−2 nm Ru nanoclusters in solution, 52,53 the PL of Ru/TiO 2 is strongly quenched, indicating ultrafast interfacial charge separation following photoexcitation of the Ru nanoclusters. Based on the striking difference in PL quenching between the Ru nanoclusters on insulating SiO 2 and TiO 2 , we assume that the role of the thin RuO 2 shell likely present at the surface of the Ru nanoclusters (Table S1) is not the major factor in PL quenching. The RuO 2 shell is likely thin enough to allow charge tunneling 69,70 between the Ru core of the nanocluster and the TiO 2 substrate. Based on the UV−vis and PL spectra, the HOMO−LUMO energy gap of the Ru nanoclusters is estimated to equal ∼2.4 eV and likely depends on the diameter. Density functional theory studies on Ru 10 nanoclusters on anatase TiO 2 (101) immersed into water indicate that photoexcitation of the Ru nanocluster is followed by electron transfer into the TiO 2 . 
54 Photoexcitation of 1−3 nm size Au nanoclusters 71 and 10 nm diameter Au nanoparticles 72,73 was also reported to result in electron transfer into TiO 2 . Based on these studies, we cautiously propose that photoinduced interfacial charge separation may occur by electron transfer from the LUMO of the Ru nanocluster, through the ultrathin RuO 2 shell, into the TiO 2 conduction band. The strong PL quenching indicates that the quantum yield for light-induced charge separation is likely close to unity. The nature of the charge-transfer process will depend on the Ru LUMO energy level, relative to the CB minimum of TiO 2 . In case the LUMO level is equal to or higher in energy, electron transfer from Ru into TiO 2 is allowed. Alternatively, hole transfer following photoexcitation of the Ru from the HOMO into, e.g., a surface trap state of the TiO 2 may occur. The distribution in Ru particle diameters and oxidation states may well result in a distribution in HOMO and LUMO energy levels and, as a result, alter the photoinduced interfacial charge-transfer mechanism. The major impact of UV versus visible photoexcitation on the interface processes uncovered in the present work has important consequences for the nanostructural design of Ru/ TiO 2 photocatalysts and the choice of illumination source. The light sources used in photocatalytic and thermal studies are diverse and typically range from a solar simulator to a Hg or Xe lamp or light-emitting diode (LED). 74 Importantly, the contribution of UV versus visible light varies for these sources, while the present work clearly demonstrates key differences. The TiO 2 surface PL quenching observed for a 267 nm excitation, likely due to the transfer of surface hole polarons into the RuO 2 , can be considered as a cocatalytic effect in which the surface oxidation of the Ru nanocluster or particle likely plays an important role. As a key process under these conditions involves the generation of mobile electrons in the TiO 2 nanoparticle bulk, which first need to diffuse toward the surface before utilization in a photocatalytic process is possible and during which process losses occur, this implies a relatively low quantum yield for light-induced charge separation. In contrast, illumination at 532 nm predominantly excites the Ru nanoclusters, which results in ultrafast charge separation with the TiO 2 with a likely close to unity quantum yield. Outcompeting intrinsic decay processes within metal nanoparticles by interfacial charge separation with a metal oxide semiconductor can be challenging, 18,75 although light-The Journal of Physical Chemistry C pubs.acs.org/JPCC Article induced interfacial charge separation could occur during photoexcitation via direct electron transfer. 76,77 The efficient charge separation observed in the present work likely results from the relatively slow molecular-type excited-state decay dynamics of the Ru nanoclusters (Table 1), enabling lightinduced interfacial charge transfer to outcompete intrinsic excited-state decay processes. To the best of our knowledge, this is the first time that time-resolved PL spectroscopy studies have been performed on Ru-loaded TiO 2 to elucidate the charge carrier mechanisms induced by light absorption. Considering the key role of the photoexcitation wavelength in the mechanism and quantum yield of interfacial charge separation unraveled in the present work is essential in the design of efficient Ru/TiO 2 photocatalysts. 
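As a brief recap of the analysis machinery before the conclusions: the parallel decay model behind the global analysis describes the data as a sum of decay-associated spectra, each multiplied by a single-exponential decay. The sketch below is a rough, hypothetical illustration of the variable-projection idea (solve linearly for the DAS at fixed lifetimes while optimizing the lifetimes); the synthetic data, lifetimes, grids, and amplitudes are made up, and the actual analysis was performed with Glotaran and included the instrument response.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 2000.0, 200)        # delay axis in ps (illustrative)
    wl = np.linspace(380.0, 650.0, 60)       # wavelength axis in nm (illustrative)

    # Synthetic "measurement": two DAS with placeholder lifetimes of 25 ps and 700 ps.
    true_tau = np.array([25.0, 700.0])
    true_das = np.vstack([np.exp(-((wl - 430.0) / 40.0) ** 2),
                          0.5 * np.exp(-((wl - 520.0) / 60.0) ** 2)])
    data = np.exp(-t[:, None] / true_tau) @ true_das + rng.normal(0.0, 0.01, (t.size, wl.size))

    def das_for(tau):
        """Best-fit DAS (linear least squares) for a given set of lifetimes."""
        C = np.exp(-t[:, None] / np.asarray(tau))         # decay basis, shape (n_t, n_components)
        das, *_ = np.linalg.lstsq(C, data, rcond=None)    # DAS, shape (n_components, n_wl)
        return C, das

    def cost(log_tau):
        C, das = das_for(np.exp(log_tau))
        return np.sum((data - C @ das) ** 2)

    fit = minimize(cost, x0=np.log([50.0, 500.0]), method="Nelder-Mead")
    print("fitted lifetimes (ps):", np.exp(fit.x))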
■ CONCLUSIONS In this work, we have uncovered the light-induced processes for a few nanometer size Ru nanoclusters deposited onto nanocrystalline TiO 2 by time-resolved PL spectroscopy, with a major role of the photoexcitation wavelength in the mechanism and quantum yield of light-induced charge separation. The Ru nanoclusters cause (i) quenching of surface PL of TiO 2 following photoexcitation at 267 nm and (ii) show no PL when deposited on TiO 2 and excited at 532 nm, which in both cases can be explained by charge-transfer phenomena occurring at the Ru/TiO 2 interface. We anticipate the role of a thin RuO 2 shell in the phenomena upon a 267 nm excitation, whereas the Ru metal core plays an important role at a 532 nm excitation. Currently, we are expanding the time-resolved PL setup to investigate how in situ photothermal conditions, including a reductive gaseous atmosphere (affecting the Ru oxidation state and inducing the presence of molecular adsorbates), influence photoinduced interfacial charge separation, to develop a mechanistic understanding in the possible synergy of light and elevated temperature in photothermal catalysis. Normalized PL decay of Ru/SiO 2 in Ar recorded at the beginning of integration and after 3 h of illumination; XRD pattern of TiO 2 ; HAADF-STEM image and XPS spectrum of Ru/SiO 2 ; XPS analysis of Ru/TiO 2 and Ru/ SiO 2 ; Kubelka−Munk plots of TiO 2 , SiO 2 , Ru/TiO 2 , and Ru/SiO 2 ; PL spectra of SiO 2 and Ru/SiO 2 in Ar with λ exc. = 267 nm; comparison of PL spectra between TiO 2 and Ru/TiO 2 in Ar and SiO 2 and Ru/SiO 2 in Ar with λ exc. = 267 nm; number of photons detected for TiO 2 and SiO 2 in Ar with λ exc. = 532 nm; PL spectra of Ru/SiO 2 in Ar with λ exc. = 532 nm; and normalized DAS spectra of Ru/SiO 2 in Ar with λ exc. = 532 nm (PDF)
8,524.6
2023-07-15T00:00:00.000
[ "Chemistry", "Physics" ]
Muon g-2 and Hadronic Vacuum Polarization: Recent Developments We discuss various experiments on e+e− annihilation into hadrons relevant to the problem of the muon anomalous magnetic moment. They include the status of ISR measurements of e+e− → π+π− as well as studies of numerous hadronic final states in experiments with the CMD-3 and SND detectors at the VEPP-2000 e+e− collider. Introduction The anomalous magnetic moment of the muon is one of the most precisely known physical quantities. In 2006 the BNL E821 Collaboration published the final results of their measurement of the muon anomaly a_μ ≡ (g_μ − 2)/2 [1]. Various calculations show that the Standard Model (SM) prediction is about 3.5 standard deviations below the experimental value [2,3]. At the moment two new measurements of a_μ, each aimed at a four-fold increase in accuracy, are planned at Fermilab and J-PARC. If the central value of the experimental result is confirmed, the deviation between experiment and theory will reach 8-10 standard deviations, unambiguously pointing to effects of New Physics. The theoretical prediction accuracy is currently limited by the uncertainties of the hadronic vacuum polarization, which is extracted from the cross sections of e+e− annihilation into hadrons measured by a scan method with the CMD-2 and SND detectors at VEPP-2M and by initial-state radiation (ISR) at BaBar (Fig. 1); see the review of e+e− experiments in Ref. [4]. Since it is absolutely necessary to improve the experimental accuracy, experiments on low-energy e+e− annihilation into hadrons are currently in progress in various centers. In addition to the low-energy measurements, other energy ranges are still of interest for a_μ and particularly for the α(M_Z²) determination. Recently a new measurement of R was performed between 3.12 and 3.72 GeV in Novosibirsk using the KEDR detector [5]. The achieved systematic uncertainty is 2.1% with a total uncertainty of 3.3%. The results are shown in Fig. 2 (R measurement at KEDR [5]). Analysis of the KEDR scan between 1.9 GeV and J/ψ [6] is in progress, while R measurements from 2 to 4.6 GeV with BESIII and further studies of the charmonium region with KEDR are also planned. Improvement to about 2% in total can be expected from the joint efforts of BESIII and KEDR. 2 ISR measurements of e+e− → π+π− The process e+e− → π+π− is known to give the largest contribution to the leading-order hadronic term a_μ^{LO,had}, about 73%. Recent ISR measurements of this process substantially improved the accuracy of its cross section. The BaBar Collaboration used a data sample collected at the peak of the Υ(4S) resonance to achieve a record precision of about 0.5% near the ρ meson peak [7,8]. KLOE used ISR running at the φ meson peak and gradually increased the precision of their measurements to 0.7% [9][10][11]. However, the results of KLOE and BaBar differ, with the discrepancy reaching 5% in some energy regions, far beyond the declared precision. A very recent precise measurement has been performed by the BES Collaboration running at the ψ(3770) peak [12]. Their ISR measurement reached a precision of about 0.9%; see the cross section in Fig. 3.
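For orientation, since the article does not write them out: the leading-order hadronic vacuum polarization contribution is conventionally obtained from the measured cross sections through the standard dispersion integral, and the two-pion channel enters through the pion form factor via the lowest-order (bare) cross-section relation,

    a_\mu^{\mathrm{LO,had}} = \frac{\alpha^2}{3\pi^2} \int_{s_{\mathrm{th}}}^{\infty} \frac{ds}{s}\, K(s)\, R(s), \qquad
    R(s) = \frac{\sigma(e^+e^- \to \mathrm{hadrons})}{\sigma(e^+e^- \to \mu^+\mu^-)},

    \sigma(e^+e^- \to \pi^+\pi^-)(s) = \frac{\pi \alpha^2}{3 s}\, \beta_\pi^3(s)\, |F_\pi(s)|^2, \qquad
    \beta_\pi(s) = \sqrt{1 - 4 m_\pi^2 / s},

where K(s) is a known, smooth QED kernel that weights the low-energy region most strongly; this is why the π+π− channel alone supplies about 73% of a_μ^{LO,had} and why sub-percent accuracy on its cross section matters so much.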
Comparison between the results of BESIII and those from other groups (SND [13], CMD-2 [14], BaBar, and KLOE) shows that there are local discrepancies between the data of BaBar, KLOE, and BESIII. This can be seen from Figure 4, which illustrates how the results obtained by ISR compare in terms of the hadronic contribution to the muon anomaly after integration. The BESIII result lies between those of BaBar and KLOE, being somewhat closer to the latter. Obviously, new measurements with comparable or even better precision are needed. Experiments at VEPP-2000 In 2010 a new low-energy e+e− collider, VEPP-2000, was commissioned in Novosibirsk [15]. In 2011 data taking started with two detectors, CMD-3 and SND. An integrated luminosity of ∼60 pb−1 was collected by each in 2011-2013 in the center-of-mass (c.m.) energy range 320-2000 MeV. Figure 5 illustrates the accumulation of luminosity in various energy ranges. Lately the analysis has been mainly focused on the c.m. energies above the φ meson. In particular, both groups made an attempt to improve the precision of various cross sections important for the muon anomaly problem [16]. The integrated luminosity at CMD-3 is measured using two processes, e+e− → e+e− and e+e− → γγ, allowing a precision of ∼1% [17]. At SND, events of large-angle Bhabha scattering are used to determine the integrated luminosity with a systematic accuracy of 2% [18]. CMD-3 declares an aggressive goal of reaching 0.35% accuracy in the measurement of the pion form factor; see their preliminary results in Fig. 6 [19]. The cross section of the process e+e− → π+π−π0 clearly shows two excitations of the ω(783) meson, the ω(1420) and ω(1650). On the contrary, the energy dependence of the process e+e− → π+π−η is dominated by the ρ(770) recurrences, the ρ(1450) and ρ(1700). In both cases the SND data from VEPP-2000 are consistent with the results from VEPP-2M and BaBar and have comparable precision. CMD-3 has also measured the cross sections of these processes with comparable precision, not shown here. CMD-3 and SND studied all three possible charge combinations of six-pion production. The cross section in Fig. 8 shows the case when all pions are charged, with an obvious dip around the NN̄ threshold [21]. Such behavior of the cross section had been observed before at BaBar [22], as well as in much earlier e+e− and photoproduction measurements, suggesting a probable threshold effect due to the opening of the NN̄ channel [23]. Near the NN̄ threshold the behavior of the cross section for the 2π+2π−2π0 final state (see Fig. 9) is also irregular, but somewhat differs from that in the 3π+3π− case [24]. Finally, SND has measured for the first time ever the cross section of e+e− → π+π−4π0, where the size of the data sample does not allow any conclusions about the NN̄ threshold, Fig.
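The pion form factor targeted by the CMD-3 precision goal is related to the bare e+e− → π+π− cross section through the lowest-order expression σ0(s) = (πα²/3s) βπ³(s) |Fπ(s)|², with βπ = sqrt(1 − 4mπ²/s). The sketch below shows that conversion; it is purely illustrative, omits the radiative and vacuum-polarization corrections applied in the real analyses, and the example numbers are not measured values.

```python
import math

M_PI = 0.13957            # charged pion mass, GeV
ALPHA = 1 / 137.035999
GEV2_TO_NB = 389379.3     # conversion factor, GeV^-2 -> nb

def beta_pi(s):
    """Pion velocity factor at c.m. energy squared s (GeV^2)."""
    return math.sqrt(1.0 - 4.0 * M_PI**2 / s)

def fpi_squared(sigma_nb, s):
    """|F_pi(s)|^2 from a bare cross section (nb), lowest-order relation only."""
    sigma_gev = sigma_nb / GEV2_TO_NB
    born = math.pi * ALPHA**2 / (3.0 * s) * beta_pi(s)**3
    return sigma_gev / born

# Example: a ~1300 nb cross section at sqrt(s) = 0.775 GeV (near the rho peak)
print(fpi_squared(1300.0, 0.775**2))
```

Near the ρ peak, where the cross section is of order a microbarn, this relation gives |Fπ|² of order 40, the scale at which sub-percent cross-section accuracy translates into the form-factor precision quoted above.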
10. These final states have interesting and rich dynamics, and their analysis should be performed simultaneously for all three final states. For example, the ρf0(1370) intermediate mechanism can result in all three possible charge combinations of the six-pion state (the fully neutral 6π0 state cannot be produced in one-photon annihilation, since that would violate C parity). There are also mechanisms which do not give the all-charged final state, e.g., ω3π or η3π. An additional problem is that such final states can have negative G-parity, whereas normally six-pion states should have positive G-parity. CMD-3 continued studies of various processes with kaons in the final state using good K/π separation based on measuring dE/dx in the drift chamber. In Fig. 11 we show the cross section of the process e+e− → K+K− near the φ meson at CMD-3 (left) and in the whole energy range at SND (right) [25]. The CMD-3 group has also reported results on the cross section and dynamics of the K+K−π+π− final state, see Fig. 12. While the energy dependence of the cross section is clear and obviously shows the φ(1680) state, there is still a lot to be done about the dynamics. Using the missing-mass method, CMD-3 has also measured the cross section of the process e+e− → K+K−η, Fig. 13. Work is in progress on the process e+e− → K+K−π0. SND has already published measurements of the processes with only neutral particles in the final state: e+e− → π0π0γ [26] and e+e− → ηγ [27]. The latter is of particular interest since for the first time events of the process have been found above 1.4 GeV, see Fig. 14. A study of the nucleon form factors near threshold was continued. SND significantly improved the precision of σ(e+e− → nn̄) [28] compared to the previous results from FENICE [29], see Fig. 15. CMD-3 measured the cross section of the process e+e− → pp̄ and made an attempt to extract the ratio of the electric and magnetic form factors based on the angular distribution of the final nucleons [30], see Fig. 16. Both detectors used the original method of Ref. [31] to measure the partial width of the strongly suppressed η → e+e− decay using the inverse process. CMD-3 reported an upper limit of Γ(η → e+e−) < 0.0024 eV at 90% CL based on 2.69 pb−1 and one mode of η decay [32]. SND used 2.9 pb−1 and five modes of η decay to improve it to < 0.0020 eV. Finally, the data samples of CMD-3 and SND were combined to obtain Γ(η → e+e−) < 0.0011 eV at 90% CL [33], which is still about two orders of magnitude above the unitary bound. SND has also performed a feasibility study for a search for η → e+e− via e+e− → η and concluded that the only promising decay mode for this is η → 3π0 [34]. A dedicated two-week run with the luminosity expected at the c.m. energy around the η meson mass will allow the existing limit to be improved [35]. After the upgrade of the VEPP-2000 collider and the commissioning of the new injection complex, the luminosity of the complex is expected to increase by an order of magnitude. Both detectors will run for another five years with the goal of collecting 1-2 fb−1 and significantly increasing the accuracy of all hadronic channels. Figure 15. Cross section of the process e+e− → nn̄ at SND [28].
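The |G_E/G_M| extraction mentioned for e+e− → pp̄ relies on the standard decomposition of the angular distribution, dσ/dcosθ ∝ (1 + cos²θ)|G_M|² + (sin²θ/τ)|G_E|² with τ = s/(4m_p²). The sketch below fits a binned cosθ distribution for the ratio r = |G_E/G_M|; the binning, pseudo-data, and statistical treatment are illustrative assumptions and not the CMD-3 analysis of Ref. [30].

```python
import numpy as np
from scipy.optimize import curve_fit

M_P = 0.93827  # proton mass, GeV

def angular_shape(cos_theta, norm, r, s):
    """dN/dcos(theta) ~ (1 + cos^2)|G_M|^2 + (sin^2/tau)|G_E|^2, in terms of
    r = |G_E/G_M| and an overall normalization."""
    tau = s / (4.0 * M_P**2)
    return norm * ((1.0 + cos_theta**2) + (r**2 / tau) * (1.0 - cos_theta**2))

def fit_ge_gm_ratio(cos_theta_values, counts, s):
    """Least-squares fit of the binned cos(theta) distribution for r."""
    popt, pcov = curve_fit(
        lambda c, norm, r: angular_shape(c, norm, r, s),
        cos_theta_values, counts,
        p0=(counts.max(), 1.0),
        sigma=np.sqrt(np.maximum(counts, 1.0)),
    )
    return popt[1], np.sqrt(pcov[1, 1])

# Toy usage: Poisson pseudo-data generated with r = 1 at sqrt(s) = 2.0 GeV
rng = np.random.default_rng(0)
s = 2.0**2
centers = np.linspace(-0.85, 0.85, 18)
counts = rng.poisson(angular_shape(centers, 500.0, 1.0, s)).astype(float)
print(fit_ge_gm_ratio(centers, counts, s))
```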
2,326.8
2016-04-26T00:00:00.000
[ "Physics" ]
The teaching of physics at upper secondary school level: A comparative study between Indonesia and Ireland This study aims to investigate the teaching approaches taken by physics teachers in Indonesia and Ireland when teaching a module on Medical Physics in the classroom. Additionally, students' attitudes to the module on Medical Physics were also explored. In particular, the views of these teachers toward inquiry based science education (IBSE) and direct instruction (DI) when implementing this module with students in the 14-16 age group were examined. Data were collected to investigate how teachers in the two countries used combinations of the IBSE and DI teaching approaches when teaching the module to their students. Arising out of the implementation of the module, it was hoped that the module would serve as a "hook" to interest students in physics by teaching topics in physics via real-life applications of physics. Thus, the attitudes of the students toward science on completion of the module were assessed. A total of 15 schools in Indonesia (402 students) and 15 schools in Ireland (263 students) participated in the project. Data were collected from the teachers and students using questionnaires. Among the findings were that while teachers in Ireland were unanimous in their agreement with the inclusion of IBSE activities in the lesson plans supplied, only 67% of the teachers in Indonesia agreed with the inclusion of these activities in the module. There was a strong relationship between the type of school and the students' attitude toward the module. Students in the more academic type schools in both Ireland and Indonesia were less positive about the module. Among the problems highlighted by teachers in Indonesia was the lack of laboratory facilities. Also, students in both countries commented on problems with terminology and literacy in general when studying physics. While the module brought out a positive response from students, it had limited success in convincing them to continue with their study of physics at the upper secondary school level. Introduction In the 2012 Programme for International Student Assessment (PISA) test results, it was found that of the 65 countries that participated in the test, Indonesia ranked 60th in literacy skills and 64th in mathematics and science (OECD, 2014). A similar pattern was observed for Indonesia in the PISA 2009 results. On the contrary, Ireland has seen considerable improvement in recent years and is now ranked ninth of the 65 participating countries for science, fourth for reading, and 13th for mathematics (OECD, 2014). Arising from these results, it was felt appropriate to carry out a comparative study between the two countries in order to investigate the issues involved in teaching science in very different environments. In comparing the teaching approaches adopted by teachers in the two countries, it was decided to investigate the different approaches to teaching physics using either an inquiry based science education (IBSE) approach or a direct instruction (DI) approach. When discussing these contrasting teaching approaches, the inquiry-based approach is often described in terms of being student-centered [Sweitzer and Anderson, 1983; American Association for the Advancement of Science (AAAS), 1990; National Research Council (NRC), 1996; Alberts, 2008; Juntunen and Aksela, 2013; Jiang and McComas, 2015].
On the contrary, the direct instruction approach is often described in terms of a teacher-centered approach (McKeen, 1972; Peterson, 1979; Becher, 1980; Rosenshine, 1995; Cobern et al., 2010). However, as will be discussed in this paper, these two categories of IBSE and DI are part of a continuum or spectrum of teaching approaches. Some authors represent DI in terms of a traditional classroom setting where students are perceived as sitting in straight rows of desks and learning through rote memorization (Brown et al., 1982; Borko and Wildman, 1986; Brooks and Brooks, 1999). In this scenario, students are described as attentively listening to the teacher standing in front of the class to impart information and compliantly taking notes without necessarily interacting with the topic being taught. Direct instruction should not be confused with didactic teaching. Hattie (2009) discusses in detail the main characteristics of direct instruction and outlines them in terms of seven major steps, as shown in Table 1 (Hattie, 2009, pp. 205-206). It is the above description of direct instruction that was adopted in this study, and which may be summarized as follows: "In a nutshell: The teacher decides the learning intentions and success criteria, makes them transparent to the students, demonstrates them by modeling, evaluates if they understand what they have been told by checking for understanding, and re-telling them what they have been told by tying it all together with closure" (Hattie, 2009, p. 206). The inquiry based science education approach is described as "the art of developing challenging situations in which students are asked to observe and question phenomena; pose explanations of what they observe; devise and conduct experiments in which data are collected to support or contradict their theories; analyze data; draw conclusions from experimental data; design and build models; or any combination of these" (Hattie, 2009). In the IBSE approach, students are described as being actively involved in their own learning, with the teacher using student investigations and discussions to challenge the students to think about the work being undertaken. Many teachers will recognize the above descriptions as being at the extreme ends of the spectrum of teaching approaches and may see themselves as using various aspects of the two approaches in their everyday teaching to achieve the learning outcomes of the lesson. In this paper, we will investigate and discuss how teachers in Indonesia and Ireland used combinations of the IBSE and DI teaching approaches when teaching a module on Medical Physics to their students. Arising from the implementation of the Medical Physics module, it is hoped that more students will be encouraged to undertake the study of physics at the senior high school level. The later steps of direct instruction outlined in Table 1 (Hattie, 2009) are as follows. 3. There is a need to build commitment and engagement in the learning task. In the terminology of direct instruction, this is sometimes called a "hook" to grab a student's attention. The aim is to put students into a receptive frame of mind; to focus student attention on the lesson; to share the learning intentions. 4. There are guides to how the teacher should present the lesson, including notions such as input, modeling, and checking for understanding. Input refers to providing the information needed for students to gain the knowledge or skill through lecture, film, tape, video, pictures, and so on. Modeling is where the teacher shows students examples of what is expected as an end product of their work.
The critical aspects are explained through labeling, categorizing, and comparing to exemplars of what is desired. Checking for understanding involves monitoring whether students have "got it" before proceeding; it is essential that students practice doing it right, so the teacher must know that students understand before they start to practice. If there is any doubt that the class has not understood, the concept or skill should be re-taught before the practice begins. 5. There is the notion of guided practice. This involves an opportunity for each student to demonstrate his or her grasp of new learning by working through an activity or exercise under the teacher's direct supervision. The teacher moves around the room to determine the level of mastery and to provide feedback and individual remediation as needed. 6. There is the closure part of the lesson. Closure involves those actions or statements by a teacher that are designed to bring a lesson presentation to an appropriate conclusion; the part wherein students are helped to bring things together in their own minds, to make sense out of what has just been taught. "Any questions? No. OK let us move on" is not closure. Closure is used to cue students to the fact that they have arrived at an important point in the lesson or the end of a lesson, to help organize student learning, to help form a coherent picture, to consolidate, eliminate confusion and frustration, and so on, and to reinforce the major points to be learned. Thus, closure involves reviewing and clarifying the key points of a lesson, tying them together into a coherent whole, and ensuring they will be applied by the student by ensuring they have become part of the student's conceptual network. 7. There is independent practice. Once students have mastered the content or skill, it is time to provide for reinforcement practice. It is provided on a repeating schedule so that the learning is not forgotten. It may be homework or group or individual work in class. It is important to note that this practice can provide for decontextualisation: enough different contexts so that the skill or concept is not tied to the context in which it was originally learned. For example, if the lesson is about inference from reading a passage about dinosaurs, the practice should be about inference from reading about another topic such as whales. The advocates of direct instruction argue that the failure to take this seventh step is responsible for most students' failure to be able to apply something learned. Thus, we first consider some aspects of students' attitude toward physics as a subject and then investigate the effect that the intervention package had on the attitude toward physics of the participating students. Students' attitude toward physics in school science The study of students' attitudes toward science is not a new topic in science education. For almost 50 years, hundreds of journal papers as well as reviews (Gardner, 1975; Schibeci, 1984; Simpson and Oliver, 1990; Crawley and Koballa, 1994; Osborne et al., 2003; Koballa and Glynn, 2007; Hofstein and Mamlok-Naaman, 2011; Bennett et al., 2013; Ültay et al., 2017, 2021; Ültay and Alev, 2017a,b) and dissertations have been published at the international level in the area of students' attitudes toward science. The concept of an attitude toward science is somewhat nebulous, often poorly articulated, and not well understood (Osborne and Dillon, 2008).
Considerable clarity was brought to the topic by the PISA 2012 project: when the results of this project were discussed (PISA, 2013), students' attitudes toward science were treated under four main headings: a. Support for scientific inquiry, i.e., do students value scientific ways of gathering evidence, thinking logically, and communicating conclusions? b. Self-belief as science students, i.e., what are students' appraisals of their own abilities in science? c. Interest in science, i.e., are students interested in science-related social issues, are they willing to acquire scientific knowledge and skills, and do they consider science-related careers? d. Responsibility toward resources and environments, i.e., are students concerned about environmental issues? It is on part (c) above that this research focused, i.e., looking at the challenges involved in trying to improve students' attitudes toward science and increase their interest in science. At the international level, the falling numbers choosing to pursue the study of physics at senior high school level (OECD, 2014) are mirrored in Indonesia and Ireland (Kompas, 2013; Hyland, 2014). Enhancing a positive attitude toward science lessons is essential for two reasons: (a) students' attitudes and their academic performance are closely related, and (b) attitudes may be used to forecast students' behavior in encouraging them to choose to continue with their study of physics (Glasman and Albarracín, 2006; Cheung, 2009). The subject of Physics presents particular difficulties for students as they encounter problems related to the use of mathematical equations and the manipulation of mathematical data (Angell et al., 2004; Ornek et al., 2007; Collins, 2011). This results in many concepts and principles of physics being difficult to understand. Hence, the interest of students in studying physics is adversely affected. Of the several factors that can affect students' interest in science, especially in the area of Physics, the approach to teaching that is adopted by the teacher is one of fundamental importance (Wellington and Ireson, 2008). We now consider this approach in terms of its two main sub-divisions, i.e., inquiry-based science education and direct instruction. The balance of inquiry-based science education and direct instruction As previously mentioned, some authors have put forward the idea that direct instruction represents an undesirable form of teaching and interpret the term "direct instruction" as didactic teaching. Direct instruction has been described as "authoritarian" (McKeen, 1972), "regimented" (Borko and Wildman, 1986), "fact accumulation at the expense of thinking skill development" (Edwards, 1981), and "focusing upon tests" (Nicholls, 1989). Direct instruction has also been portrayed as a "passive" mode of teaching (Becher, 1980) and as the "pouring of information from one container, the teacher's head, to another container, the student's head" (Brown and Campione, 1990). All of these critics of direct instruction are proposing that teachers use forms of "student-centered" or activity-based instruction in place of direct instruction. Many educators feel that inquiry instruction rather than direct instruction is most in keeping with the widely accepted constructivist theory of how people learn, i.e., that meaningful knowledge cannot simply be transmitted and absorbed but learners have to construct their own understanding (Anderson, 2002; Cobern et al., 2010).
Some studies have found a positive effect of IBSE [e.g., Bredderman, 1985; National Research Council (NRC), 1996, 2005; Donnelly et al., 2014; Ireland et al., 2014]. Other researchers have found a negative effect of IBSE; e.g., Buntern et al. (2014) argued that IBSE leads to high cognitive load and is thus not effective in the classroom. On the other side, Arnold et al. (2014) argue that direct instruction cannot embrace the complex nature of scientific reasoning in an authentic fashion (Chinn and Malhotra, 2002), nor is it consistent with constructivist views of learning (Hmelo-Silver et al., 2007). One of the big challenges facing teachers is deciding when to use IBSE, when to give support, and when to hold back information in order to maintain authentic inquiry settings, especially in upper secondary school (Crawford, 2000; Furtak, 2007). Wiggins and McTighe call it the dilemma of "direct instruction versus constructivist approach" (Wiggins and McTighe, 2005). Educators have been indoctrinated with the mantra "constructivism good, direct instruction bad" (Hattie, 2009). Colburn (2000) stated that perhaps one source of confusion about inquiry based science education is the belief that it is only for "advanced" students. This is a misconception, as all students can achieve success if teachers guide them toward understanding by implementing different activities in the classroom. However, there are many times when inquiry-based science education may be less advantageous than other methods. It depends on our experience as teachers to find the right balance between inquiry and non-inquiry methods that engages our students in their study of science (Gagne, 1963). In addition, Kennedy (2013) argues that "one of the clear outcomes from the research literature is that IBSE approaches to science teaching do result in an increase in the interest levels of students in sciences. Based on the research evidence outlined in this paper, it does not seem wise to 'put all our eggs in one basket' and promote IBSE as the only approach to effective science teaching. We need to get the right balance between the direct instruction approach and the IBSE approach" (Kennedy, 2013). In most cases, it may be best for teachers to use a combination of approaches to ensure that the needs of all students in terms of knowledge, understanding, skills, attitudes, values, scientific literacy, and overall interest in science and science-related topics are met. The advantages and disadvantages of IBSE as outlined in the literature are summarized in Table 2. The advantages and disadvantages of direct instruction as outlined in the literature are summarized in Table 3. Overview of the medical physics intervention package The Medical Physics module used in this research was designed to encourage an interest in physics among young students through a relevant hands-on interactive learning experience using many real-life examples. The module offers an introduction to medical physics through investigative and cooperative learning experiences. The module is divided into five units (X-Rays, Ultrasound, Endoscopy, MRI & CT Scans, and Radioactivity), with the objectives of each unit clearly stated at the beginning of each unit. Each unit focuses on basic physics concepts presented in a logical sequence, with learning outcomes stated at the end of each unit. The Medical Physics module is designed to challenge and motivate students.
Extracts from Tables 2 and 3, giving arguments cited in the literature for and against IBSE and direct instruction: Children can use their natural activity and curiosity when learning about a new concept (Vandervoort, 1983; Dewey, 2008; Eshach, 1997; Henderson and David, 2007). Careful planning and preparation are also required for adequate content information to be imparted to students, which makes it difficult for some science topics to be taught using the inquiry method (Robertson, 2007). Piaget believes that as the child grows, his/her brain experiences intellectual development and he/she starts to construct mental structures through interaction with the environment (Lawson and Renner, 1975). Science, being a vast accumulation of discoveries, must be transmitted through books, charts, tables, etc.; therefore a great deal of science content must be taught, and education cannot possibly fulfill its obligation by simply arranging for rediscovery (Skinner, 1987). Inquiry teaching methods do not provide for much adult support, and the child always needs the support of an adult (Beliavsky, 2006). (Mason, 1963). It is possible for students to forget facts if rote memorization is the method of imparting information; Dewey was disturbed to see rote memorization and mechanical routine practices in the science classroom (Vandervoort, 1983). Teachers prefer to use direct instruction because it is the most organized way of teaching (Qablan et al., 2009). The danger with this practice is that there is no foundation of knowledge built which the students can draw from in the event that they forget the memorized knowledge; their process skills and abilities to make judgments would not have been significantly developed (Vandervoort, 1983; Wang and Staver, 1995). Teachers find it hard to keep students motivated as they are left by themselves to acquire knowledge through inquiry-based learning (Bencze, 2009). With direct instruction, the teacher poses the problem and may then solve it without giving the students an opportunity to discover; therefore the child is not given an opportunity to use the necessary process skills (Ray, 1961). Children receive more guidance as teachers make sure that students have understood each step before moving on to the next (Skinner, 1987; Robertson, 2007). This method is accepted and promoted in many cultures and languages (Lee, 2002). Whereas each lesson can be taught in a single class (40 min), it is recommended that, if possible, a double lesson (80-90 min) be devoted to each lesson in order to allow time for discussion and other activities. The module encourages a teaching approach involving a balance between inquiry based science education and direct instruction approaches. These approaches are encouraged by the inclusion of a detailed Teacher's Guide and a wide variety of student activities to encourage IBSE. Practical work activities are included throughout the module. These practical activities are used to model scientific principles as applied to medical physics. Expert Group Tasks are included in the module and are designed to encourage IBSE. The students work collaboratively and prepare presentations for the rest of the class. In addition, this module is designed for teaching using an integrated IBSE-DI approach in each lesson. Methods This study involved a case study comparative research approach using a qualitative method.
Also, some aspects of action research were involved, as feedback from the schools involved in the implementation of the module was used to incorporate modifications in the module for schools that will participate in future trials. Because the target sample included both native English speakers and native Indonesian speakers, the teaching package was translated from English into Indonesian by the researcher. A total of 34 teachers received in-service training on the module. Of these, 15 schools in Indonesia and 15 schools in Ireland were selected to participate in the project using random sampling. In Indonesia, the researcher took samples from three different school types, i.e., Madrasah secondary schools, which are equivalent to the voluntary schools in Ireland; general secondary schools in Indonesia, which are equivalent to community/comprehensive schools in Ireland; and vocational secondary schools in Indonesia, which are equivalent to Education and Training Board (ETB) schools in Ireland. Circulars were distributed to schools, and teachers were invited to attend training workshops to familiarize them with the teaching package. Trialing was carried out by seven schools in each country, and this helped to "fine-tune" the teaching package. No major modifications were necessary. In general, over 5.1 million secondary school pupils were enrolled in roughly 26,000 secondary schools in Indonesia. The Ministry of Education and Culture oversees 84% of these schools, and the Ministry of Religious Affairs oversees the remaining 16%. In Indonesia, high school takes 3 years to complete. Indonesians have access to pre-professional and vocational high schools in addition to traditional high schools. In Indonesia, attending elementary through high school is required (Pambudi and Harjanto, 2020; Setiawan, 2020). In comparison, there are roughly 395,611 secondary school students in 3,968 secondary schools in Ireland. Dublin is the largest county, accounting for 18% of Irish secondary schools (706 secondary schools). With 428 secondary schools (11%), Cork comes second, and Galway also has a substantial number, with 233 secondary schools. Together, these three counties account for 34% of Irish secondary schools (Coolahan, 1995). For the purposes of this study, a total of 402 students in the 14-16 age group from Indonesia and 263 students from Ireland were randomly selected to participate. The smaller number of students in Ireland was due to the fact that many transition year students (age 15-16) were involved in work experience programs and therefore were unable to participate in the project. Teachers were supplied with the module as a teaching package and were given complete freedom in how they wished to implement it in the classroom. Questionnaires were issued for completion by teachers and students. In this study, all questionnaires were distributed to the teachers at in-service training courses and returned to the researcher via the postal system. The response rate was 100%. The data were analyzed both quantitatively and qualitatively. Triangulation was carried out by comparing data obtained from the students about each lesson with descriptions from teachers on how they taught the lesson.
Response of teachers to the medical physics intervention package The questionnaire issued to teachers ranged over a number of areas, e.g., type of school, size of the school, subject specialism in degree, teaching experience, gender, time spent implementing the intervention package, and the assistance obtained from the level of detail in the objectives, learning outcomes, lesson plans, and the Teacher's Guide. Teachers were asked about their use of IBSE and DI when implementing the intervention package in the classroom. In this paper, we concentrate on the teachers' responses to the questions relating to IBSE and DI. When the Irish science teachers were asked their opinion about the inclusion of inquiry based science education activities in each of the lesson plans provided, all of the teachers (100%) agreed that it was a good idea to include these activities. Typical responses were: ▪ Yes, IBSE results in greater student engagement. ▪ Allowed students the opportunity to think/reflect on their own knowledge. ▪ Inquiry based science education is an advanced approach. Students questioning, researching, thus enhancing their communication skills; solving problems or creating solutions. Also encourages student "thinking" visible to the center of the learning. So, it is a good idea to include IBSE activities. This result compares with the Indonesian teachers' responses, where only 67% of the science teachers had a positive response to the inclusion of IBSE activities in the lesson plans provided. Interestingly, 33% of science teachers argued that it wasn't a good idea to put IBSE activities in the learning process. Typical reasons given were: This approach is designed for students of high ability; most science teachers have difficulties implementing this method of teaching. This approach takes too much time. This method of teaching is difficult to implement and difficult to design assessment for. There is a lack of laboratory equipment, administrative support, and school facilities to help me to use this approach. Clearly, the majority of the sample of teachers in both countries expressed a positive attitude toward IBSE. However, it is clear that in the case of a significant number of science teachers in Indonesia, the perception that IBSE was only for higher ability students and the lack of laboratory facilities were clearly seen as impediments to implementing an IBSE approach, due to the fact that some of the practical activities could not be carried out. Some interesting points of agreement were observed between the science teachers in Ireland and in Indonesia when asked about the balance between IBSE and DI in their teaching of the lessons in the intervention package. The results are summarized in Figure 1. Clearly, the comparative analysis showed that approximately 40% of teachers in both countries reported that the balance of IBSE and DI was in the ratio of 50:50. It is also worth noting that a significant number of teachers in Ireland (41%) and in Indonesia (29%) felt that the balance between IBSE and DI was in the ratio of 1:3. Some typical comments obtained were: "In my classroom, I tried to teach with more emphasis on inquiry based learning, but it needs more time allocated. Comparing IBSE with DI is a good idea." "I think there are many reasons why the balance should be 50% IBSE-50% DI: (1) the number of students in the classroom, (2) laboratory equipment, (3) students' abilities, and (4) the allocation of learning time."
" When teachers were asked to comment on the benefits that they saw of IBSE and DI approaches to teaching, a wide variety of comments were received. These comments are summarized in Tables 2, 4. As seen from the above summary of the data obtained, the science teachers do not believe that there is any one perfect teaching approach to implementing the intervention package. There appears to be a continuum of a shifting balance (dynamic equilibrium) between student-centered learning (inquiry based science education) and teacher-centered learning (direct instruction) to ensure that these two approaches complement each other. Response of students to the medical physics module The questionnaire issued to students asked their views on a number of areas, e.g., (1) gender, (2) age, (3) type of school, (4) level of interest in science, (5) performance in past science examinations, (6) level of difficulty in understanding topics, (7) participation in group activities, (8) level of interest in topics covered in module, and (9) willingness to continue with their study of physics. Due to restrictions on space, in this paper, we concentrate on the students' responses about their level of enjoyment of the module and their interest in the study of physics. A detailed analysis of all the data is given elsewhere (Sudirman, 2016). Students were asked to indicate their level of enjoyment on a five-point Likert scale ranging from "extremely unenjoyable" to "extremely enjoyable." The results are summarized in Figure 2. It is clear that the majority of the students in both countries reported that they found the module enjoyable. Typical comments received from those who found the module enjoyable were: • It was really enjoyable to learn about different topics in physics. • It was interesting and helped further my studies. • I really enjoyed it because it shows how medical analysis works. • Not my favorite topic but it was a good lesson to know in general. • I thought the lessons were enjoyable. I participated in the expert group task and my role was as a speaker when my group presented our research project. It is clear from Figure 2 that while Irish students showed a higher level of enjoyment of the module than Indonesian students, overall, the majority of students reported that they enjoyed the module. A statistical analysis of the data was carried out and some interesting points emerged: • In Ireland male students were more interested in the module than female students but in Indonesia female students were more interested in it than male students. • In both Ireland and Indonesia, students in vocational type schools expressed the most positive attitude toward the module. Students in the more academic type schools were less positive about the module. When the Irish students were asked if the study of this module would encourage them to continue with their study of physics at senior level (Leaving Certificate), 45% indicated "yes" while more than half (55%) said "no. " Interestingly, a significant number of Indonesian students (78%) said "yes" while only 22% said "no. " Some typical comments from students were: Science = awesome, Science makes me happy to further…it increased my curiosity and interest in physics. I actually like physics, but for the next year I will choose the Social Sciences Program which does not include physics (compulsory). Maybe if I were allowed to choose physics, I would also choose it. 
FIGURE 1 Comparative analysis regarding the reported balance between inquiry based science education (IBSE) and direct instruction (DI) during the teaching process. The latter quotation above points to the fact that in Indonesia it is not possible to choose to study science subjects if one is specializing in subjects that are part of the Social Sciences program. This problem does not arise in Ireland, where students study a total of seven subjects, which include both social science subjects and science subjects. Analysis of the comments from the students in both countries revealed that some terms used in this module affected their interest due to a lack of literacy skills. The study shows that students had difficulty not only with the technical words, but more commonly with everyday words used in the module. It would appear from the analysis of the student questionnaires that some of the teachers did not explain the meaning of many of the common terms encountered in the module, as they may have assumed that these were understood by the students. This is in keeping with the findings of Cassells and Johnstone (1985) and Wellington and Osborne (2001). The use of the DI approach during the teaching process has clear significance for helping students to overcome literacy problems. Without an emphasis on supporting literacy, students may become frustrated with the problems being encountered, and this may contribute to developing a negative attitude due to the difficulty of understanding the subject matter. Conclusion and recommendation Analysis of the data obtained from teachers and students clearly shows that the Medical Physics module has been successful in generating positive responses from both teachers and students. There is a statistically significant difference in responses regarding some variables in the module between Indonesian and Irish science teachers, as well as in the responses from students. Analysis of the data from teachers and students shows that the teaching package was teacher-friendly, clear and concise, well laid out, and easy to follow. Teachers reported that the various methodologies and strategies used in the package were popular and could be easily adapted and modified for use in secondary school science lessons. Based on the findings of the study that arise from the data analysis, and bearing in mind its implications, the following recommendations are made. (1) There are some clear implications arising from this study for policymakers. Policymakers refer to those involved in curriculum design, members of the inspectorate, and other government agencies whose responsibility involves guiding the future of science education. Policymakers must ensure that continuing professional development programs for science teachers are provided to help them to develop a balance in their teaching between inquiry based science education and direct instruction. Also, teachers were clearly happy with the well-defined learning outcomes for each lesson in the module. Hence, it is important to provide training to science teachers in the writing of learning outcomes and the methodology involved in teaching within a learning outcomes framework. While the concept of learning outcomes is well known in Ireland, the concept of a learning outcomes framework is quite new to teachers in Indonesia. (2) The availability of suitable laboratory facilities is very important in supporting an effective science teaching and learning environment.
Policymakers at national and local level could better address the needs of science teachers in schools in terms of providing better quality laboratory facilities; this problem was particularly acute in Indonesia in promoting IBSE. (3) The Medical Physics module has received a strong positive response from both teachers and students. Similar modules could be devised, such as in astronomy, biotechnology, electronics, and other areas. Learning physics in the context of applications of science and technology, allied with good pedagogy, can create a good learning atmosphere. While the students enjoyed studying the module, it had limited success in convincing them to continue with their study of physics at a higher level. The response of the teachers showed that there was a good balance between IBSE and DI in the teaching approach used by teachers when implementing this module. It is hoped that the study presented here will contribute to the development of new and innovative ways of teaching physics at the secondary school level. Extracts from the table of benefits of IBSE and DI as reported by teachers (columns: IBSE, DI): Engages students and provides a greater cognitive challenge, i.e., scientific attitude and scientific process. Teachers are able to guide students in face-to-face teaching and maximize students' understanding. Students work independently. Can be adapted for the complete range of students' abilities. Can serve the needs of students who have above average ability, that is, students who have good ability and good study skills. Can determine what the students need when facing difficulties in understanding. IBSE is a teaching strategy that emphasizes the development of a balance between the cognitive, affective, and psychomotor domains. Creates an interactive learning environment, particularly for students with lower abilities. Allows students to understand the scientific process. Listening activities play a key role in success in implementing the DI approach. The teacher identifies the depth of students' knowledge and understanding of the concepts being discussed. Can be used to determine the important points or difficulties that may be faced by students. Pace and content can be adapted to suit the individual learning needs of students, and this also helps develop critical thinking skills. The most effective way to teach concepts and skills to students who are underachieving. Allows students to think more critically about the topic being explored. Teachers can demonstrate how a problem can be approached, how the information is analyzed, and how the knowledge is generated. Provides a space for students to learn according to their learning styles. It makes learning science interesting and relevant to students' everyday life by establishing a direct link between theory and its application. Focuses the students' attention on relevant content. FIGURE 2 Responses of students indicating their level of enjoyment of the module. Data availability statement The data that support the findings of this study are available from the corresponding author. Ethics statement Ethical approval was not required for the study involving human participants in accordance with the local legislation and institutional requirements. Written informed consent to participate in this study was not required from the participants in accordance with the national legislation and the institutional requirements.
Author contributions SuS conceived of the presented idea, developed the theory, verified the analytical methods, and performed the writing-original draft and conceptualization. DK supervised the findings of this work. SoS performed writing-review and editing. All authors contributed to the article and approved the submitted version. Funding The project was funded by the Ministry of Finance, the Republic of Indonesia through the Indonesia Endowment Fund for Education (LPDP Scholarship).
8,254.4
2023-03-13T00:00:00.000
[ "Physics", "Education" ]